00:00:00.001 Started by upstream project "autotest-per-patch" build number 122865
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.106 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.107 The recommended git tool is: git
00:00:00.107 using credential 00000000-0000-0000-0000-000000000002
00:00:00.109 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.171 Fetching changes from the remote Git repository
00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.210 Using shallow fetch with depth 1
00:00:00.210 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.210 > git --version # timeout=10
00:00:00.239 > git --version # 'git version 2.39.2'
00:00:00.239 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.240 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.240 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.551 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.563 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.575 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD)
00:00:06.575 > git config core.sparsecheckout # timeout=10
00:00:06.586 > git read-tree -mu HEAD # timeout=10
00:00:06.604 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5
00:00:06.624 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule"
00:00:06.624 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10
00:00:06.703 [Pipeline] Start of Pipeline
00:00:06.713 [Pipeline] library
00:00:06.714 Loading library shm_lib@master
00:00:06.715 Library shm_lib@master is cached. Copying from home.
00:00:06.733 [Pipeline] node
00:00:06.740 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.745 [Pipeline] {
00:00:06.754 [Pipeline] catchError
00:00:06.755 [Pipeline] {
00:00:06.765 [Pipeline] wrap
00:00:06.772 [Pipeline] {
00:00:06.780 [Pipeline] stage
00:00:06.781 [Pipeline] { (Prologue)
00:00:06.944 [Pipeline] sh
00:00:07.239 + logger -p user.info -t JENKINS-CI
00:00:07.258 [Pipeline] echo
00:00:07.260 Node: WFP8
00:00:07.267 [Pipeline] sh
00:00:07.600 [Pipeline] setCustomBuildProperty
00:00:07.611 [Pipeline] echo
00:00:07.612 Cleanup processes
00:00:07.616 [Pipeline] sh
00:00:07.905 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.905 26062 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.919 [Pipeline] sh
00:00:08.211 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.211 ++ grep -v 'sudo pgrep'
00:00:08.211 ++ awk '{print $1}'
00:00:08.211 + sudo kill -9
00:00:08.211 + true
00:00:08.226 [Pipeline] cleanWs
00:00:08.236 [WS-CLEANUP] Deleting project workspace...
00:00:08.237 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.243 [WS-CLEANUP] done
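
The "Cleanup processes" step above guards against a previous run leaving SPDK processes behind in the workspace. A minimal sketch of that idiom (workspace path taken from this job; the trailing '|| true' mirrors the '+ true' above, so the step stays green when there is nothing to kill):

  # List processes still running out of the workspace, drop the pgrep itself,
  # keep only the PIDs, then force-kill whatever remains.
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true
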
00:00:08.248 [Pipeline] setCustomBuildProperty
00:00:08.263 [Pipeline] sh
00:00:08.554 + sudo git config --global --replace-all safe.directory '*'
00:00:08.631 [Pipeline] nodesByLabel
00:00:08.633 Found a total of 1 nodes with the 'sorcerer' label
00:00:08.646 [Pipeline] httpRequest
00:00:08.942 HttpMethod: GET
00:00:08.943 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:09.677 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:09.936 Response Code: HTTP/1.1 200 OK
00:00:09.996 Success: Status code 200 is in the accepted range: 200,404
00:00:09.997 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:21.552 [Pipeline] sh
00:00:21.838 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:21.852 [Pipeline] httpRequest
00:00:21.857 HttpMethod: GET
00:00:21.857 URL: http://10.211.164.101/packages/spdk_f0bf11db48eedd08b9c2ba933ea8f3595a4c9594.tar.gz
00:00:21.859 Sending request to url: http://10.211.164.101/packages/spdk_f0bf11db48eedd08b9c2ba933ea8f3595a4c9594.tar.gz
00:00:21.867 Response Code: HTTP/1.1 200 OK
00:00:21.867 Success: Status code 200 is in the accepted range: 200,404
00:00:21.867 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f0bf11db48eedd08b9c2ba933ea8f3595a4c9594.tar.gz
00:04:49.329 [Pipeline] sh
00:04:49.618 + tar --no-same-owner -xf spdk_f0bf11db48eedd08b9c2ba933ea8f3595a4c9594.tar.gz
00:04:52.171 [Pipeline] sh
00:04:52.457 + git -C spdk log --oneline -n5
00:04:52.457 f0bf11db4 nvmf/auth: execute DH-HMAC-CHAP_reply message
00:04:52.457 2b14ffc34 nvmf: method for getting DH-HMAC-CHAP keys
00:04:52.457 091d58775 nvme: make spdk_nvme_dhchap_calculate() public
00:04:52.457 2c8f92576 nvmf/auth: send DH-HMAC-CHAP_challenge message
00:04:52.457 c06b0c79b nvmf: make allow_any_host its own byte
00:04:52.469 [Pipeline] }
00:04:52.485 [Pipeline] // stage
00:04:52.493 [Pipeline] stage
00:04:52.495 [Pipeline] { (Prepare)
00:04:52.512 [Pipeline] writeFile
00:04:52.527 [Pipeline] sh
00:04:52.812 + logger -p user.info -t JENKINS-CI
00:04:52.825 [Pipeline] sh
00:04:53.105 + logger -p user.info -t JENKINS-CI
00:04:53.117 [Pipeline] sh
00:04:53.403 + cat autorun-spdk.conf
00:04:53.403 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:53.403 SPDK_TEST_NVMF=1
00:04:53.403 SPDK_TEST_NVME_CLI=1
00:04:53.403 SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:53.403 SPDK_TEST_NVMF_NICS=e810
00:04:53.403 SPDK_TEST_VFIOUSER=1
00:04:53.403 SPDK_RUN_UBSAN=1
00:04:53.403 NET_TYPE=phy
00:04:53.411 RUN_NIGHTLY=0
00:04:53.415 [Pipeline] readFile
00:04:53.436 [Pipeline] withEnv
00:04:53.438 [Pipeline] {
00:04:53.451 [Pipeline] sh
00:04:53.737 + set -ex
00:04:53.737 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:04:53.737 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:53.737 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:53.737 ++ SPDK_TEST_NVMF=1
00:04:53.737 ++ SPDK_TEST_NVME_CLI=1
00:04:53.737 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:53.737 ++ SPDK_TEST_NVMF_NICS=e810
00:04:53.737 ++ SPDK_TEST_VFIOUSER=1
00:04:53.737 ++ SPDK_RUN_UBSAN=1
00:04:53.737 ++ NET_TYPE=phy
00:04:53.737 ++ RUN_NIGHTLY=0
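
The trace block that follows maps SPDK_TEST_NVMF_NICS to the kernel driver the rig needs: for the e810 NICs configured above that is ice. Condensed to a sketch (only the e810 arm is shown; the rmmod list is the set of RDMA drivers that could otherwise claim the ports):

  case "$SPDK_TEST_NVMF_NICS" in
    e810) DRIVERS=ice ;;   # Intel E810 ports are driven by the ice driver
  esac
  if [[ -n "$DRIVERS" ]]; then
    # Unload competing RDMA drivers first; "not currently loaded" errors are harmless.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do sudo modprobe "$D"; done
  fi
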
00:04:53.737 + case $SPDK_TEST_NVMF_NICS in
00:04:53.737 + DRIVERS=ice
00:04:53.737 + [[ tcp == \r\d\m\a ]]
00:04:53.737 + [[ -n ice ]]
00:04:53.737 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:04:53.737 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:04:57.032 rmmod: ERROR: Module irdma is not currently loaded
00:04:57.032 rmmod: ERROR: Module i40iw is not currently loaded
00:04:57.032 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:04:57.032 + true
00:04:57.032 + for D in $DRIVERS
00:04:57.032 + sudo modprobe ice
00:04:57.032 + exit 0
00:04:57.043 [Pipeline] }
00:04:57.060 [Pipeline] // withEnv
00:04:57.065 [Pipeline] }
00:04:57.082 [Pipeline] // stage
00:04:57.090 [Pipeline] catchError
00:04:57.092 [Pipeline] {
00:04:57.105 [Pipeline] timeout
00:04:57.105 Timeout set to expire in 40 min
00:04:57.106 [Pipeline] {
00:04:57.118 [Pipeline] stage
00:04:57.120 [Pipeline] { (Tests)
00:04:57.136 [Pipeline] sh
00:04:57.426 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:57.426 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:57.426 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:57.426 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:04:57.426 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:57.426 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:57.426 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:04:57.426 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:57.426 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:57.426 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:57.426 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:57.426 + source /etc/os-release
00:04:57.426 ++ NAME='Fedora Linux'
00:04:57.426 ++ VERSION='38 (Cloud Edition)'
00:04:57.426 ++ ID=fedora
00:04:57.426 ++ VERSION_ID=38
00:04:57.426 ++ VERSION_CODENAME=
00:04:57.426 ++ PLATFORM_ID=platform:f38
00:04:57.426 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:04:57.426 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:57.426 ++ LOGO=fedora-logo-icon
00:04:57.426 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:04:57.426 ++ HOME_URL=https://fedoraproject.org/
00:04:57.426 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:04:57.426 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:57.426 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:57.426 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:57.426 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:04:57.426 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:57.426 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:04:57.426 ++ SUPPORT_END=2024-05-14
00:04:57.426 ++ VARIANT='Cloud Edition'
00:04:57.426 ++ VARIANT_ID=cloud
00:04:57.426 + uname -a
00:04:57.426 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
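
setup.sh status, run next, summarizes per-NUMA-node hugepages and PCI device bindings. The hugepage counts come straight from sysfs, so they can also be read by hand; a small sketch using the standard Linux sysfs layout (not SPDK-specific):

  # Print "free / total" hugepages per node and page size, as in the table below.
  for n in /sys/devices/system/node/node*; do
    for h in "$n"/hugepages/hugepages-*; do
      echo "$(basename "$n") $(basename "$h"): $(cat "$h"/free_hugepages) / $(cat "$h"/nr_hugepages)"
    done
  done
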
00:04:57.426 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:59.971 Hugepages
00:04:59.971 node hugesize free / total
00:04:59.971 node0 1048576kB 0 / 0
00:04:59.971 node0 2048kB 0 / 0
00:04:59.971 node1 1048576kB 0 / 0
00:04:59.971 node1 2048kB 0 / 0
00:04:59.971
00:04:59.971 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:59.971 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:59.971 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:59.971 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:59.971 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:59.971 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:59.971 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:59.971 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:59.971 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:59.971 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:04:59.971 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:59.971 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:59.971 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:04:59.971 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:04:59.971 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:04:59.971 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:04:59.971 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:04:59.972 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:04:59.972 + rm -f /tmp/spdk-ld-path
00:04:59.972 + source autorun-spdk.conf
00:04:59.972 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:59.972 ++ SPDK_TEST_NVMF=1
00:04:59.972 ++ SPDK_TEST_NVME_CLI=1
00:04:59.972 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:59.972 ++ SPDK_TEST_NVMF_NICS=e810
00:04:59.972 ++ SPDK_TEST_VFIOUSER=1
00:04:59.972 ++ SPDK_RUN_UBSAN=1
00:04:59.972 ++ NET_TYPE=phy
00:04:59.972 ++ RUN_NIGHTLY=0
00:04:59.972 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:59.972 + [[ -n '' ]]
00:04:59.972 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:59.972 + for M in /var/spdk/build-*-manifest.txt
00:04:59.972 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:59.972 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:59.972 + for M in /var/spdk/build-*-manifest.txt
00:04:59.972 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:59.972 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:59.972 ++ uname
00:04:59.972 + [[ Linux == \L\i\n\u\x ]]
00:04:59.972 + sudo dmesg -T
00:04:59.972 + sudo dmesg --clear
00:04:59.972 + dmesg_pid=27999
00:04:59.972 + [[ Fedora Linux == FreeBSD ]]
00:04:59.972 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:59.972 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:59.972 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:59.972 + sudo dmesg -Tw
00:04:59.972 + [[ -x /usr/src/fio-static/fio ]]
00:04:59.972 + export FIO_BIN=/usr/src/fio-static/fio
00:04:59.972 + FIO_BIN=/usr/src/fio-static/fio
00:04:59.972 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:59.972 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:59.972 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:59.972 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:59.972 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:59.972 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:59.972 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:59.972 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:59.972 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:59.972 Test configuration:
00:04:59.972 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:59.972 SPDK_TEST_NVMF=1
00:04:59.972 SPDK_TEST_NVME_CLI=1
00:04:59.972 SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:59.972 SPDK_TEST_NVMF_NICS=e810
00:04:59.972 SPDK_TEST_VFIOUSER=1
00:04:59.972 SPDK_RUN_UBSAN=1
00:04:59.972 NET_TYPE=phy
00:04:59.972 RUN_NIGHTLY=0
08:16:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
08:16:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
08:16:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
08:16:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
08:16:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:16:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:16:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:16:46 -- paths/export.sh@5 -- $ export PATH
08:16:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:00.232 08:16:46 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:05:00.232 08:16:46 -- common/autobuild_common.sh@437 -- $ date +%s
00:05:00.232 08:16:47 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715753807.XXXXXX
00:05:00.232 08:16:47 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715753807.WxfllI
00:05:00.232 08:16:47 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:05:00.232 08:16:47 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:05:00.232 08:16:47 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:05:00.232 08:16:47 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:05:00.232 08:16:47 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:05:00.232 08:16:47 -- common/autobuild_common.sh@453 -- $ get_config_params
00:05:00.232 08:16:47 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:05:00.232 08:16:47 -- common/autotest_common.sh@10 -- $ set +x
00:05:00.232 08:16:47 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:05:00.232 08:16:47 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:05:00.232 08:16:47 -- pm/common@17 -- $ local monitor
00:05:00.232 08:16:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:00.232 08:16:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:00.232 08:16:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:00.232 08:16:47 -- pm/common@21 -- $ date +%s
00:05:00.233 08:16:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:00.233 08:16:47 -- pm/common@21 -- $ date +%s
00:05:00.233 08:16:47 -- pm/common@25 -- $ sleep 1
00:05:00.233 08:16:47 -- pm/common@21 -- $ date +%s
00:05:00.233 08:16:47 -- pm/common@21 -- $ date +%s
00:05:00.233 08:16:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715753807
00:05:00.233 08:16:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715753807
00:05:00.233 08:16:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715753807
00:05:00.233 08:16:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715753807
00:05:00.233 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715753807_collect-cpu-temp.pm.log
00:05:00.233 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715753807_collect-vmstat.pm.log
00:05:00.233 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715753807_collect-cpu-load.pm.log
00:05:00.233 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715753807_collect-bmc-pm.bmc.pm.log
00:05:01.174 08:16:48 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
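
start_monitor_resources above launches the collect-* monitors in the background, and the EXIT trap guarantees they are stopped however autobuild ends. The shape of the pattern, reduced to essentials (monitor names and the $out directory are from the log; SPDK's pm/common tracks the PIDs through pid files rather than an array, so this is a sketch, not its implementation):

  mon_pids=()
  for mon in collect-cpu-load collect-cpu-temp collect-vmstat; do
    scripts/perf/pm/"$mon" -d "$out/power" -l -p "monitor.autobuild.sh.$(date +%s)" &
    mon_pids+=($!)   # remember each monitor so the trap can kill it
  done
  stop_monitor_resources() { kill "${mon_pids[@]}" 2>/dev/null || true; }
  trap stop_monitor_resources EXIT
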
00:05:01.174 08:16:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:01.174 08:16:48 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:01.174 08:16:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:01.174 08:16:48 -- spdk/autobuild.sh@16 -- $ date -u
00:05:01.174 Wed May 15 06:16:48 AM UTC 2024
00:05:01.174 08:16:48 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:01.174 v24.05-pre-628-gf0bf11db4
00:05:01.174 08:16:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:05:01.174 08:16:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:01.174 08:16:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:01.174 08:16:48 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:05:01.174 08:16:48 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:05:01.174 08:16:48 -- common/autotest_common.sh@10 -- $ set +x
00:05:01.174 ************************************
00:05:01.174 START TEST ubsan
00:05:01.174 ************************************
00:05:01.174 08:16:48 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:05:01.174 using ubsan
00:05:01.174
00:05:01.174 real 0m0.000s
00:05:01.174 user 0m0.000s
00:05:01.174 sys 0m0.000s
00:05:01.174 08:16:48 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:05:01.174 08:16:48 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:01.174 ************************************
00:05:01.174 END TEST ubsan
00:05:01.174 ************************************
00:05:01.174 08:16:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:01.174 08:16:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:01.174 08:16:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:01.174 08:16:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:01.174 08:16:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:01.174 08:16:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:01.174 08:16:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:01.174 08:16:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:01.174 08:16:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:05:01.744 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:05:01.744 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:02.682 Using 'verbs' RDMA provider
00:05:18.516 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:05:30.738 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:05:30.738 Creating mk/config.mk...done.
00:05:30.738 Creating mk/cc.flags.mk...done.
00:05:30.738 Type 'make' to build.
00:05:30.738 08:17:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:05:30.738 08:17:16 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:05:30.738 08:17:16 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:05:30.738 08:17:16 -- common/autotest_common.sh@10 -- $ set +x
00:05:30.738 ************************************
00:05:30.738 START TEST make
00:05:30.738 ************************************
00:05:30.738 08:17:16 make -- common/autotest_common.sh@1121 -- $ make -j96
00:05:30.738 make[1]: Nothing to be done for 'all'.
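
run_test, which wrapped the ubsan check above and wraps the make that has just started, prints the START/END banners, times the wrapped command (the real/user/sys lines), and propagates its exit status. A rough sketch of the pattern; SPDK's actual implementation lives in autotest_common.sh and does more bookkeeping:

  run_test() {
    local name=$1 rc=0; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@" || rc=$?   # run the wrapped command, keeping its status
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
  }
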
00:05:31.675 The Meson build system
00:05:31.675 Version: 1.3.1
00:05:31.675 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:05:31.675 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:31.675 Build type: native build
00:05:31.675 Project name: libvfio-user
00:05:31.675 Project version: 0.0.1
00:05:31.675 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:05:31.675 C linker for the host machine: cc ld.bfd 2.39-16
00:05:31.675 Host machine cpu family: x86_64
00:05:31.675 Host machine cpu: x86_64
00:05:31.675 Run-time dependency threads found: YES
00:05:31.675 Library dl found: YES
00:05:31.675 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:05:31.675 Run-time dependency json-c found: YES 0.17
00:05:31.675 Run-time dependency cmocka found: YES 1.1.7
00:05:31.675 Program pytest-3 found: NO
00:05:31.675 Program flake8 found: NO
00:05:31.675 Program misspell-fixer found: NO
00:05:31.675 Program restructuredtext-lint found: NO
00:05:31.675 Program valgrind found: YES (/usr/bin/valgrind)
00:05:31.675 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:31.675 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:31.675 Compiler for C supports arguments -Wwrite-strings: YES
00:05:31.675 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:31.675 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:05:31.675 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:05:31.675 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:31.675 Build targets in project: 8
00:05:31.675 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:05:31.675 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:05:31.675
00:05:31.675 libvfio-user 0.0.1
00:05:31.675
00:05:31.675 User defined options
00:05:31.675 buildtype : debug
00:05:31.675 default_library: shared
00:05:31.675 libdir : /usr/local/lib
00:05:31.675
00:05:31.675 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:31.933 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:31.933 [1/37] Compiling C object samples/null.p/null.c.o
00:05:31.933 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:05:31.933 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:05:31.933 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:05:31.933 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:05:31.933 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:05:31.933 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:05:31.933 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:05:31.933 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:05:31.933 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:05:31.933 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:05:31.933 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:05:31.933 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:05:31.933 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:05:32.192 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:05:32.192 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:05:32.192 [17/37] Compiling C object samples/server.p/server.c.o
00:05:32.192 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:05:32.192 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:05:32.192 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:05:32.192 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:05:32.192 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:05:32.192 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:05:32.192 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:05:32.192 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:05:32.192 [26/37] Compiling C object samples/client.p/client.c.o
00:05:32.192 [27/37] Linking target samples/client
00:05:32.192 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:05:32.192 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:05:32.192 [30/37] Linking target test/unit_tests
00:05:32.192 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:05:32.452 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:05:32.452 [33/37] Linking target samples/server
00:05:32.452 [34/37] Linking target samples/lspci
00:05:32.452 [35/37] Linking target samples/null
00:05:32.452 [36/37] Linking target samples/shadow_ioeventfd_server
00:05:32.452 [37/37] Linking target samples/gpio-pci-idio-16
00:05:32.452 INFO: autodetecting backend as ninja
00:05:32.452 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:32.452 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:32.711 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:32.711 ninja: no work to do.
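
The libvfio-user build above follows the standard Meson flow: configure a build directory, compile it with Ninja, then stage the result inside the SPDK tree via a DESTDIR install rather than installing system-wide. Standalone, the same sequence is roughly (paths shortened; the option set is the one echoed in the summary above):

  meson setup build-debug libvfio-user --buildtype debug -Ddefault_library=shared
  ninja -C build-debug
  DESTDIR="$PWD/stage" meson install --quiet -C build-debug
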
00:05:37.991 The Meson build system
00:05:37.991 Version: 1.3.1
00:05:37.991 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:05:37.991 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:05:37.991 Build type: native build
00:05:37.991 Program cat found: YES (/usr/bin/cat)
00:05:37.991 Project name: DPDK
00:05:37.991 Project version: 23.11.0
00:05:37.991 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:05:37.991 C linker for the host machine: cc ld.bfd 2.39-16
00:05:37.991 Host machine cpu family: x86_64
00:05:37.991 Host machine cpu: x86_64
00:05:37.991 Message: ## Building in Developer Mode ##
00:05:37.991 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:37.991 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:05:37.991 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:37.991 Program python3 found: YES (/usr/bin/python3)
00:05:37.991 Program cat found: YES (/usr/bin/cat)
00:05:37.992 Compiler for C supports arguments -march=native: YES
00:05:37.992 Checking for size of "void *" : 8
00:05:37.992 Checking for size of "void *" : 8 (cached)
00:05:37.992 Library m found: YES
00:05:37.992 Library numa found: YES
00:05:37.992 Has header "numaif.h" : YES
00:05:37.992 Library fdt found: NO
00:05:37.992 Library execinfo found: NO
00:05:37.992 Has header "execinfo.h" : YES
00:05:37.992 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:05:37.992 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:37.992 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:37.992 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:37.992 Run-time dependency openssl found: YES 3.0.9
00:05:37.992 Run-time dependency libpcap found: YES 1.10.4
00:05:37.992 Has header "pcap.h" with dependency libpcap: YES
00:05:37.992 Compiler for C supports arguments -Wcast-qual: YES
00:05:37.992 Compiler for C supports arguments -Wdeprecated: YES
00:05:37.992 Compiler for C supports arguments -Wformat: YES
00:05:37.992 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:37.992 Compiler for C supports arguments -Wformat-security: NO
00:05:37.992 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:37.992 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:37.992 Compiler for C supports arguments -Wnested-externs: YES
00:05:37.992 Compiler for C supports arguments -Wold-style-definition: YES
00:05:37.992 Compiler for C supports arguments -Wpointer-arith: YES
00:05:37.992 Compiler for C supports arguments -Wsign-compare: YES
00:05:37.992 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:37.992 Compiler for C supports arguments -Wundef: YES
00:05:37.992 Compiler for C supports arguments -Wwrite-strings: YES
00:05:37.992 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:37.992 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:37.992 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:37.992 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:37.992 Program objdump found: YES (/usr/bin/objdump)
00:05:37.992 Compiler for C supports arguments -mavx512f: YES
00:05:37.992 Checking if "AVX512 checking" compiles: YES
00:05:37.992 Fetching value of define "__SSE4_2__" : 1
00:05:37.992 Fetching value of define "__AES__" : 1
00:05:37.992 Fetching value of define "__AVX__" : 1
00:05:37.992 Fetching value of define "__AVX2__" : 1
00:05:37.992 Fetching value of define "__AVX512BW__" : 1
00:05:37.992 Fetching value of define "__AVX512CD__" : 1
00:05:37.992 Fetching value of define "__AVX512DQ__" : 1
00:05:37.992 Fetching value of define "__AVX512F__" : 1
00:05:37.992 Fetching value of define "__AVX512VL__" : 1
00:05:37.992 Fetching value of define "__PCLMUL__" : 1
00:05:37.992 Fetching value of define "__RDRND__" : 1
00:05:37.992 Fetching value of define "__RDSEED__" : 1
00:05:37.992 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:05:37.992 Fetching value of define "__znver1__" : (undefined)
00:05:37.992 Fetching value of define "__znver2__" : (undefined)
00:05:37.992 Fetching value of define "__znver3__" : (undefined)
00:05:37.992 Fetching value of define "__znver4__" : (undefined)
00:05:37.992 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:37.992 Message: lib/log: Defining dependency "log"
00:05:37.992 Message: lib/kvargs: Defining dependency "kvargs"
00:05:37.992 Message: lib/telemetry: Defining dependency "telemetry"
00:05:37.992 Checking for function "getentropy" : NO
00:05:37.992 Message: lib/eal: Defining dependency "eal"
00:05:37.992 Message: lib/ring: Defining dependency "ring"
00:05:37.992 Message: lib/rcu: Defining dependency "rcu"
00:05:37.992 Message: lib/mempool: Defining dependency "mempool"
00:05:37.992 Message: lib/mbuf: Defining dependency "mbuf"
00:05:37.992 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:37.992 Fetching value of define "__AVX512F__" : 1 (cached)
00:05:37.992 Fetching value of define "__AVX512BW__" : 1 (cached)
00:05:37.992 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:05:37.992 Fetching value of define "__AVX512VL__" : 1 (cached)
00:05:37.992 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:05:37.992 Compiler for C supports arguments -mpclmul: YES
00:05:37.992 Compiler for C supports arguments -maes: YES
00:05:37.992 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:37.992 Compiler for C supports arguments -mavx512bw: YES
00:05:37.992 Compiler for C supports arguments -mavx512dq: YES
00:05:37.992 Compiler for C supports arguments -mavx512vl: YES
00:05:37.992 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:37.992 Compiler for C supports arguments -mavx2: YES
00:05:37.992 Compiler for C supports arguments -mavx: YES
00:05:37.992 Message: lib/net: Defining dependency "net"
00:05:37.992 Message: lib/meter: Defining dependency "meter"
00:05:37.992 Message: lib/ethdev: Defining dependency "ethdev"
00:05:37.992 Message: lib/pci: Defining dependency "pci"
00:05:37.992 Message: lib/cmdline: Defining dependency "cmdline"
00:05:37.992 Message: lib/hash: Defining dependency "hash"
00:05:37.992 Message: lib/timer: Defining dependency "timer"
00:05:37.992 Message: lib/compressdev: Defining dependency "compressdev"
00:05:37.992 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:37.992 Message: lib/dmadev: Defining dependency "dmadev"
00:05:37.992 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:37.992 Message: lib/power: Defining dependency "power"
00:05:37.992 Message: lib/reorder: Defining dependency "reorder"
00:05:37.992 Message: lib/security: Defining dependency "security"
00:05:37.992 Has header "linux/userfaultfd.h" : YES
00:05:37.992 Has header "linux/vduse.h" : YES
00:05:37.992 Message: lib/vhost: Defining dependency "vhost"
00:05:37.992 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:37.992 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:37.992 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:37.992 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:37.992 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:37.992 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:37.992 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:37.992 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:37.992 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:37.992 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:37.992 Program doxygen found: YES (/usr/bin/doxygen)
00:05:37.992 Configuring doxy-api-html.conf using configuration
00:05:37.992 Configuring doxy-api-man.conf using configuration
00:05:37.992 Program mandb found: YES (/usr/bin/mandb)
00:05:37.992 Program sphinx-build found: NO
00:05:37.992 Configuring rte_build_config.h using configuration
00:05:37.992 Message:
00:05:37.992 =================
00:05:37.992 Applications Enabled
00:05:37.992 =================
00:05:37.992
00:05:37.992 apps:
00:05:37.992
00:05:37.992
00:05:37.992 Message:
00:05:37.992 =================
00:05:37.992 Libraries Enabled
00:05:37.992 =================
00:05:37.992
00:05:37.992 libs:
00:05:37.992 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:37.992 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:37.992 cryptodev, dmadev, power, reorder, security, vhost,
00:05:37.992
00:05:37.992 Message:
00:05:37.992 ===============
00:05:37.992 Drivers Enabled
00:05:37.992 ===============
00:05:37.992
00:05:37.992 common:
00:05:37.992
00:05:37.992 bus:
00:05:37.992 pci, vdev,
00:05:37.992 mempool:
00:05:37.992 ring,
00:05:37.992 dma:
00:05:37.992
00:05:37.992 net:
00:05:37.992
00:05:37.992 crypto:
00:05:37.992
00:05:37.992 compress:
00:05:37.992
00:05:37.992 vdpa:
00:05:37.992
00:05:37.992
00:05:37.992 Message:
00:05:37.992 =================
00:05:37.992 Content Skipped
00:05:37.992 =================
00:05:37.992
00:05:37.992 apps:
00:05:37.992 dumpcap: explicitly disabled via build config
00:05:37.992 graph: explicitly disabled via build config
00:05:37.992 pdump: explicitly disabled via build config
00:05:37.992 proc-info: explicitly disabled via build config
00:05:37.992 test-acl: explicitly disabled via build config
00:05:37.992 test-bbdev: explicitly disabled via build config
00:05:37.992 test-cmdline: explicitly disabled via build config
00:05:37.992 test-compress-perf: explicitly disabled via build config
00:05:37.992 test-crypto-perf: explicitly disabled via build config
00:05:37.992 test-dma-perf: explicitly disabled via build config
00:05:37.992 test-eventdev: explicitly disabled via build config
00:05:37.992 test-fib: explicitly disabled via build config
00:05:37.992 test-flow-perf: explicitly disabled via build config
00:05:37.992 test-gpudev: explicitly disabled via build config
00:05:37.992 test-mldev: explicitly disabled via build config
00:05:37.993 test-pipeline: explicitly disabled via build config
00:05:37.993 test-pmd: explicitly disabled via build config
00:05:37.993 test-regex: explicitly disabled via build config
00:05:37.993 test-sad: explicitly disabled via build config
00:05:37.993 test-security-perf: explicitly disabled via build config
00:05:37.993
00:05:37.993 libs:
00:05:37.993 metrics: explicitly disabled via build config
00:05:37.993 acl: explicitly disabled via build config
00:05:37.993 bbdev: explicitly disabled via build config
00:05:37.993 bitratestats: explicitly disabled via build config
00:05:37.993 bpf: explicitly disabled via build config
00:05:37.993 cfgfile: explicitly disabled via build config
00:05:37.993 distributor: explicitly disabled via build config
00:05:37.993 efd: explicitly disabled via build config
00:05:37.993 eventdev: explicitly disabled via build config
00:05:37.993 dispatcher: explicitly disabled via build config
00:05:37.993 gpudev: explicitly disabled via build config
00:05:37.993 gro: explicitly disabled via build config
00:05:37.993 gso: explicitly disabled via build config
00:05:37.993 ip_frag: explicitly disabled via build config
00:05:37.993 jobstats: explicitly disabled via build config
00:05:37.993 latencystats: explicitly disabled via build config
00:05:37.993 lpm: explicitly disabled via build config
00:05:37.993 member: explicitly disabled via build config
00:05:37.993 pcapng: explicitly disabled via build config
00:05:37.993 rawdev: explicitly disabled via build config
00:05:37.993 regexdev: explicitly disabled via build config
00:05:37.993 mldev: explicitly disabled via build config
00:05:37.993 rib: explicitly disabled via build config
00:05:37.993 sched: explicitly disabled via build config
00:05:37.993 stack: explicitly disabled via build config
00:05:37.993 ipsec: explicitly disabled via build config
00:05:37.993 pdcp: explicitly disabled via build config
00:05:37.993 fib: explicitly disabled via build config
00:05:37.993 port: explicitly disabled via build config
00:05:37.993 pdump: explicitly disabled via build config
00:05:37.993 table: explicitly disabled via build config
00:05:37.993 pipeline: explicitly disabled via build config
00:05:37.993 graph: explicitly disabled via build config
00:05:37.993 node: explicitly disabled via build config
00:05:37.993
00:05:37.993 drivers:
00:05:37.993 common/cpt: not in enabled drivers build config
00:05:37.993 common/dpaax: not in enabled drivers build config
00:05:37.993 common/iavf: not in enabled drivers build config
00:05:37.993 common/idpf: not in enabled drivers build config
00:05:37.993 common/mvep: not in enabled drivers build config
00:05:37.993 common/octeontx: not in enabled drivers build config
00:05:37.993 bus/auxiliary: not in enabled drivers build config
00:05:37.993 bus/cdx: not in enabled drivers build config
00:05:37.993 bus/dpaa: not in enabled drivers build config
00:05:37.993 bus/fslmc: not in enabled drivers build config
00:05:37.993 bus/ifpga: not in enabled drivers build config
00:05:37.993 bus/platform: not in enabled drivers build config
00:05:37.993 bus/vmbus: not in enabled drivers build config
00:05:37.993 common/cnxk: not in enabled drivers build config
00:05:37.993 common/mlx5: not in enabled drivers build config
00:05:37.993 common/nfp: not in enabled drivers build config
00:05:37.993 common/qat: not in enabled drivers build config
00:05:37.993 common/sfc_efx: not in enabled drivers build config
00:05:37.993 mempool/bucket: not in enabled drivers build config
00:05:37.993 mempool/cnxk: not in enabled drivers build config
00:05:37.993 mempool/dpaa: not in enabled drivers build config
00:05:37.993 mempool/dpaa2: not in enabled drivers build config
00:05:37.993 mempool/octeontx: not in enabled drivers build config
00:05:37.993 mempool/stack: not in enabled drivers build config
00:05:37.993 dma/cnxk: not in enabled drivers build config
00:05:37.993 dma/dpaa: not in enabled drivers build config
00:05:37.993 dma/dpaa2: not in enabled drivers build config
00:05:37.993 dma/hisilicon: not in enabled drivers build config
00:05:37.993 dma/idxd: not in enabled drivers build config
00:05:37.993 dma/ioat: not in enabled drivers build config
00:05:37.993 dma/skeleton: not in enabled drivers build config
00:05:37.993 net/af_packet: not in enabled drivers build config
00:05:37.993 net/af_xdp: not in enabled drivers build config
00:05:37.993 net/ark: not in enabled drivers build config
00:05:37.993 net/atlantic: not in enabled drivers build config
00:05:37.993 net/avp: not in enabled drivers build config
00:05:37.993 net/axgbe: not in enabled drivers build config
00:05:37.993 net/bnx2x: not in enabled drivers build config
00:05:37.993 net/bnxt: not in enabled drivers build config
00:05:37.993 net/bonding: not in enabled drivers build config
00:05:37.993 net/cnxk: not in enabled drivers build config
00:05:37.993 net/cpfl: not in enabled drivers build config
00:05:37.993 net/cxgbe: not in enabled drivers build config
00:05:37.993 net/dpaa: not in enabled drivers build config
00:05:37.993 net/dpaa2: not in enabled drivers build config
00:05:37.993 net/e1000: not in enabled drivers build config
00:05:37.993 net/ena: not in enabled drivers build config
00:05:37.993 net/enetc: not in enabled drivers build config
00:05:37.993 net/enetfec: not in enabled drivers build config
00:05:37.993 net/enic: not in enabled drivers build config
00:05:37.993 net/failsafe: not in enabled drivers build config
00:05:37.993 net/fm10k: not in enabled drivers build config
00:05:37.993 net/gve: not in enabled drivers build config
00:05:37.993 net/hinic: not in enabled drivers build config
00:05:37.993 net/hns3: not in enabled drivers build config
00:05:37.993 net/i40e: not in enabled drivers build config
00:05:37.993 net/iavf: not in enabled drivers build config
00:05:37.993 net/ice: not in enabled drivers build config
00:05:37.993 net/idpf: not in enabled drivers build config
00:05:37.993 net/igc: not in enabled drivers build config
00:05:37.993 net/ionic: not in enabled drivers build config
00:05:37.993 net/ipn3ke: not in enabled drivers build config
00:05:37.993 net/ixgbe: not in enabled drivers build config
00:05:37.993 net/mana: not in enabled drivers build config
00:05:37.993 net/memif: not in enabled drivers build config
00:05:37.993 net/mlx4: not in enabled drivers build config
00:05:37.993 net/mlx5: not in enabled drivers build config
00:05:37.993 net/mvneta: not in enabled drivers build config
00:05:37.993 net/mvpp2: not in enabled drivers build config
00:05:37.993 net/netvsc: not in enabled drivers build config
00:05:37.993 net/nfb: not in enabled drivers build config
00:05:37.993 net/nfp: not in enabled drivers build config
00:05:37.993 net/ngbe: not in enabled drivers build config
00:05:37.993 net/null: not in enabled drivers build config
00:05:37.993 net/octeontx: not in enabled drivers build config
00:05:37.993 net/octeon_ep: not in enabled drivers build config
00:05:37.993 net/pcap: not in enabled drivers build config
00:05:37.993 net/pfe: not in enabled drivers build config
00:05:37.993 net/qede: not in enabled drivers build config
00:05:37.993 net/ring: not in enabled drivers build config
00:05:37.993 net/sfc: not in enabled drivers build config
00:05:37.993 net/softnic: not in enabled drivers build config
00:05:37.993 net/tap: not in enabled drivers build config
00:05:37.993 net/thunderx: not in enabled drivers build config
00:05:37.993 net/txgbe: not in enabled drivers build config
00:05:37.993 net/vdev_netvsc: not in enabled drivers build config
00:05:37.993 net/vhost: not in enabled drivers build config
00:05:37.993 net/virtio: not in enabled drivers build config
00:05:37.993 net/vmxnet3: not in enabled drivers build config
00:05:37.993 raw/*: missing internal dependency, "rawdev"
00:05:37.993 crypto/armv8: not in enabled drivers build config
00:05:37.993 crypto/bcmfs: not in enabled drivers build config
00:05:37.993 crypto/caam_jr: not in enabled drivers build config
00:05:37.993 crypto/ccp: not in enabled drivers build config
00:05:37.993 crypto/cnxk: not in enabled drivers build config
00:05:37.993 crypto/dpaa_sec: not in enabled drivers build config
00:05:37.993 crypto/dpaa2_sec: not in enabled drivers build config
00:05:37.993 crypto/ipsec_mb: not in enabled drivers build config
00:05:37.993 crypto/mlx5: not in enabled drivers build config
00:05:37.993 crypto/mvsam: not in enabled drivers build config
00:05:37.993 crypto/nitrox: not in enabled drivers build config
00:05:37.993 crypto/null: not in enabled drivers build config
00:05:37.993 crypto/octeontx: not in enabled drivers build config
00:05:37.993 crypto/openssl: not in enabled drivers build config
00:05:37.993 crypto/scheduler: not in enabled drivers build config
00:05:37.993 crypto/uadk: not in enabled drivers build config
00:05:37.993 crypto/virtio: not in enabled drivers build config
00:05:37.993 compress/isal: not in enabled drivers build config
00:05:37.993 compress/mlx5: not in enabled drivers build config
00:05:37.993 compress/octeontx: not in enabled drivers build config
00:05:37.993 compress/zlib: not in enabled drivers build config
00:05:37.993 regex/*: missing internal dependency, "regexdev"
00:05:37.993 ml/*: missing internal dependency, "mldev"
00:05:37.993 vdpa/ifc: not in enabled drivers build config
00:05:37.993 vdpa/mlx5: not in enabled drivers build config
00:05:37.993 vdpa/nfp: not in enabled drivers build config
00:05:37.993 vdpa/sfc: not in enabled drivers build config
00:05:37.994 event/*: missing internal dependency, "eventdev"
00:05:37.994 baseband/*: missing internal dependency, "bbdev"
00:05:37.994 gpu/*: missing internal dependency, "gpudev"
00:05:37.994
00:05:37.994
00:05:37.994 Build targets in project: 85
00:05:37.994
00:05:37.994 DPDK 23.11.0
00:05:37.994
00:05:37.994 User defined options
00:05:37.994 buildtype : debug
00:05:37.994 default_library : shared
00:05:37.994 libdir : lib
00:05:37.994 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:37.994 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:37.994 c_link_args :
00:05:37.994 cpu_instruction_set: native
00:05:37.994 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:05:37.994 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:05:37.994 enable_docs : false
00:05:37.994 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:05:37.994 enable_kmods : false
00:05:37.994 tests : false
00:05:37.994
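
The DPDK configuration above is driven entirely by -D options: unwanted apps and libraries go in disable_apps/disable_libs, and only the bus/pci, bus/vdev and mempool/ring drivers are enabled. A condensed equivalent of that configure call (lists abbreviated to a few entries; the full ones are echoed in the "User defined options" summary above):

  meson setup build-tmp dpdk --buildtype debug -Ddefault_library=shared \
    -Dc_args='-fPIC -Werror' \
    -Ddisable_apps=dumpcap,graph,pdump \
    -Ddisable_libs=sched,port,graph \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Dtests=false
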
00:05:37.994 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:37.994 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:05:37.994 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:37.994 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:37.994 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:37.994 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:37.994 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:37.994 [6/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:37.994 [7/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:37.994 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:37.994 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:37.994 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:37.994 [11/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:37.994 [12/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:37.994 [13/265] Linking static target lib/librte_kvargs.a
00:05:37.994 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:37.994 [15/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:37.994 [16/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:37.994 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:37.994 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:37.994 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:37.994 [20/265] Linking static target lib/librte_log.a
00:05:37.994 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:38.258 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:38.258 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:38.258 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:38.258 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:38.258 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:38.258 [27/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:38.258 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:38.258 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:38.258 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:38.258 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:38.259 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:38.259 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:38.259 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:38.259 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:38.259 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:38.259 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:38.524 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:38.524 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:38.524 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:38.524 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:38.524 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:38.524 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:38.524 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:05:38.524 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:38.524 [46/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:38.524 [47/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:05:38.524 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:38.524 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:38.524 [50/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:38.524 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:38.524 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:38.524 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:05:38.524 [54/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:38.524 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:38.524 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:38.524 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:38.524 [58/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:38.524 [59/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:38.524 [60/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:38.524 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:38.524 [62/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:38.524 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:38.524 [64/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:38.524 [65/265] Linking static target lib/librte_ring.a
00:05:38.524 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:38.524 [67/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:38.524 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:38.524 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:38.524 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:38.524 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:38.524 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:38.524 [73/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:38.524 [74/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:38.524 [75/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:38.524 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:38.524 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:38.524 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:38.524 [79/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:38.524 [80/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:38.524 [81/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:38.524 [82/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:38.524 [83/265] Linking static target lib/librte_meter.a 00:05:38.524 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:38.524 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:38.524 [86/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:38.524 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:38.524 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:38.524 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:38.524 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:38.524 [91/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:38.524 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:38.524 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:38.524 [94/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:38.524 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:38.524 [96/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:38.524 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:38.524 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:38.524 [99/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.524 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:38.524 [101/265] Linking static target lib/librte_telemetry.a 00:05:38.524 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:38.524 [103/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:38.524 [104/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:38.524 [105/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:38.524 [106/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:38.783 [107/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:38.783 [108/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:38.783 [109/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:38.783 [110/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:38.783 [111/265] Linking static target lib/librte_net.a 00:05:38.783 [112/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:38.783 [113/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:38.783 [114/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:38.783 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:38.783 
[116/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:38.783 [117/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:38.783 [118/265] Linking static target lib/librte_rcu.a 00:05:38.783 [119/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:38.783 [120/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:38.783 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:38.783 [122/265] Linking static target lib/librte_cmdline.a 00:05:38.783 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:38.783 [124/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:38.783 [125/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:38.783 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:38.783 [127/265] Linking static target lib/librte_eal.a 00:05:38.783 [128/265] Linking static target lib/librte_mempool.a 00:05:38.783 [129/265] Linking static target lib/librte_timer.a 00:05:38.783 [130/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:38.783 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:38.783 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:38.783 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:38.783 [134/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:38.783 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:38.783 [136/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.783 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:38.783 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:38.783 [139/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:38.783 [140/265] Linking target lib/librte_log.so.24.0 00:05:38.783 [141/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.783 [142/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.783 [143/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:38.783 [144/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:38.783 [145/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:38.783 [146/265] Linking static target lib/librte_mbuf.a 00:05:38.783 [147/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:38.783 [148/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:38.783 [149/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.783 [150/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:38.783 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:38.783 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:38.783 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:39.042 [154/265] Linking static target lib/librte_compressdev.a 00:05:39.042 [155/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:05:39.042 [156/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:39.042 [157/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:39.042 [158/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:39.042 [159/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:39.042 [160/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:39.042 [161/265] Linking target lib/librte_kvargs.so.24.0 00:05:39.042 [162/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:39.042 [163/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:39.042 [164/265] Linking static target lib/librte_dmadev.a 00:05:39.042 [165/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.042 [166/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:39.042 [167/265] Linking static target lib/librte_reorder.a 00:05:39.042 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:39.042 [169/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:39.042 [170/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:39.042 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:39.042 [172/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:39.042 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:39.042 [174/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:39.042 [175/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:39.042 [176/265] Linking static target lib/librte_power.a 00:05:39.042 [177/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:39.042 [178/265] Linking static target lib/librte_security.a 00:05:39.042 [179/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:39.042 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:39.042 [181/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.042 [182/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:39.042 [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:39.043 [184/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.043 [185/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:05:39.043 [186/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:39.043 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:39.043 [188/265] Linking target lib/librte_telemetry.so.24.0 00:05:39.043 [189/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:39.302 [190/265] Linking static target lib/librte_hash.a 00:05:39.302 [191/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:39.302 [192/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:39.302 [193/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:39.302 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:39.302 [195/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:05:39.302 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:39.302 
[197/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:39.302 [198/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:39.302 [199/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:39.302 [200/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:39.302 [201/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:39.302 [202/265] Linking static target drivers/librte_bus_vdev.a 00:05:39.302 [203/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:39.302 [204/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.302 [205/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:39.302 [206/265] Linking static target drivers/librte_bus_pci.a 00:05:39.302 [207/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:39.302 [208/265] Linking static target lib/librte_cryptodev.a 00:05:39.302 [209/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:39.302 [210/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:39.560 [211/265] Linking static target drivers/librte_mempool_ring.a 00:05:39.560 [212/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.560 [213/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.560 [214/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.560 [215/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.560 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.819 [217/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.819 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:39.819 [219/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.819 [220/265] Linking static target lib/librte_ethdev.a 00:05:39.819 [221/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:39.819 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.819 [223/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.080 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.019 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:41.019 [226/265] Linking static target lib/librte_vhost.a 00:05:41.280 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.663 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.950 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.891 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.891 [231/265] Linking target lib/librte_eal.so.24.0 00:05:48.891 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 
00:05:49.150 [233/265] Linking target lib/librte_meter.so.24.0 00:05:49.150 [234/265] Linking target lib/librte_timer.so.24.0 00:05:49.150 [235/265] Linking target lib/librte_ring.so.24.0 00:05:49.151 [236/265] Linking target lib/librte_pci.so.24.0 00:05:49.151 [237/265] Linking target lib/librte_dmadev.so.24.0 00:05:49.151 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:05:49.151 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:05:49.151 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:05:49.151 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:05:49.151 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:05:49.151 [243/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:05:49.151 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:05:49.151 [245/265] Linking target lib/librte_mempool.so.24.0 00:05:49.151 [246/265] Linking target lib/librte_rcu.so.24.0 00:05:49.411 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:05:49.411 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:05:49.411 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:05:49.411 [250/265] Linking target lib/librte_mbuf.so.24.0 00:05:49.411 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:05:49.670 [252/265] Linking target lib/librte_compressdev.so.24.0 00:05:49.670 [253/265] Linking target lib/librte_reorder.so.24.0 00:05:49.670 [254/265] Linking target lib/librte_net.so.24.0 00:05:49.670 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:05:49.670 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:05:49.670 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:05:49.670 [258/265] Linking target lib/librte_hash.so.24.0 00:05:49.670 [259/265] Linking target lib/librte_cmdline.so.24.0 00:05:49.670 [260/265] Linking target lib/librte_security.so.24.0 00:05:49.670 [261/265] Linking target lib/librte_ethdev.so.24.0 00:05:49.930 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:05:49.930 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:05:49.930 [264/265] Linking target lib/librte_power.so.24.0 00:05:49.930 [265/265] Linking target lib/librte_vhost.so.24.0 00:05:49.930 INFO: autodetecting backend as ninja 00:05:49.930 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:05:50.871 CC lib/ut_mock/mock.o 00:05:50.871 CC lib/log/log.o 00:05:50.871 CC lib/log/log_flags.o 00:05:50.871 CC lib/log/log_deprecated.o 00:05:50.871 CC lib/ut/ut.o 00:05:51.132 LIB libspdk_ut_mock.a 00:05:51.132 LIB libspdk_log.a 00:05:51.132 LIB libspdk_ut.a 00:05:51.132 SO libspdk_ut_mock.so.6.0 00:05:51.132 SO libspdk_ut.so.2.0 00:05:51.132 SO libspdk_log.so.7.0 00:05:51.132 SYMLINK libspdk_ut_mock.so 00:05:51.132 SYMLINK libspdk_ut.so 00:05:51.132 SYMLINK libspdk_log.so 00:05:51.702 CC lib/util/bit_array.o 00:05:51.702 CC lib/util/base64.o 00:05:51.702 CC lib/util/cpuset.o 00:05:51.702 CC lib/util/crc32.o 00:05:51.702 CC lib/util/crc16.o 00:05:51.702 CC lib/util/crc32_ieee.o 00:05:51.702 CC lib/util/crc32c.o 00:05:51.702 CC lib/util/crc64.o 00:05:51.702 
CC lib/util/dif.o 00:05:51.702 CC lib/util/fd.o 00:05:51.702 CC lib/util/hexlify.o 00:05:51.702 CC lib/util/file.o 00:05:51.702 CXX lib/trace_parser/trace.o 00:05:51.702 CC lib/util/iov.o 00:05:51.702 CC lib/util/math.o 00:05:51.702 CC lib/dma/dma.o 00:05:51.702 CC lib/util/pipe.o 00:05:51.702 CC lib/util/strerror_tls.o 00:05:51.702 CC lib/ioat/ioat.o 00:05:51.702 CC lib/util/string.o 00:05:51.702 CC lib/util/uuid.o 00:05:51.702 CC lib/util/fd_group.o 00:05:51.702 CC lib/util/xor.o 00:05:51.702 CC lib/util/zipf.o 00:05:51.702 CC lib/vfio_user/host/vfio_user_pci.o 00:05:51.702 CC lib/vfio_user/host/vfio_user.o 00:05:51.702 LIB libspdk_dma.a 00:05:51.702 SO libspdk_dma.so.4.0 00:05:51.702 LIB libspdk_ioat.a 00:05:51.961 SYMLINK libspdk_dma.so 00:05:51.961 SO libspdk_ioat.so.7.0 00:05:51.961 SYMLINK libspdk_ioat.so 00:05:51.961 LIB libspdk_vfio_user.a 00:05:51.961 SO libspdk_vfio_user.so.5.0 00:05:51.961 LIB libspdk_util.a 00:05:51.961 SYMLINK libspdk_vfio_user.so 00:05:51.961 SO libspdk_util.so.9.0 00:05:52.220 SYMLINK libspdk_util.so 00:05:52.479 CC lib/json/json_parse.o 00:05:52.479 CC lib/json/json_util.o 00:05:52.479 CC lib/json/json_write.o 00:05:52.479 CC lib/vmd/vmd.o 00:05:52.479 CC lib/conf/conf.o 00:05:52.479 CC lib/vmd/led.o 00:05:52.479 CC lib/rdma/common.o 00:05:52.479 CC lib/rdma/rdma_verbs.o 00:05:52.479 CC lib/idxd/idxd.o 00:05:52.479 CC lib/idxd/idxd_user.o 00:05:52.479 CC lib/env_dpdk/env.o 00:05:52.479 CC lib/env_dpdk/memory.o 00:05:52.479 CC lib/env_dpdk/pci.o 00:05:52.479 CC lib/env_dpdk/init.o 00:05:52.479 CC lib/env_dpdk/threads.o 00:05:52.479 CC lib/env_dpdk/pci_ioat.o 00:05:52.479 CC lib/env_dpdk/pci_virtio.o 00:05:52.479 CC lib/env_dpdk/pci_vmd.o 00:05:52.479 CC lib/env_dpdk/pci_idxd.o 00:05:52.479 CC lib/env_dpdk/pci_event.o 00:05:52.479 CC lib/env_dpdk/sigbus_handler.o 00:05:52.479 CC lib/env_dpdk/pci_dpdk.o 00:05:52.479 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:52.479 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:52.739 LIB libspdk_conf.a 00:05:52.739 LIB libspdk_json.a 00:05:52.739 LIB libspdk_rdma.a 00:05:52.739 SO libspdk_conf.so.6.0 00:05:52.739 SO libspdk_json.so.6.0 00:05:52.739 SO libspdk_rdma.so.6.0 00:05:52.739 SYMLINK libspdk_conf.so 00:05:52.739 SYMLINK libspdk_json.so 00:05:52.739 SYMLINK libspdk_rdma.so 00:05:52.998 LIB libspdk_idxd.a 00:05:52.998 SO libspdk_idxd.so.12.0 00:05:52.998 LIB libspdk_vmd.a 00:05:52.998 SO libspdk_vmd.so.6.0 00:05:52.998 SYMLINK libspdk_idxd.so 00:05:52.998 SYMLINK libspdk_vmd.so 00:05:52.998 LIB libspdk_trace_parser.a 00:05:52.998 CC lib/jsonrpc/jsonrpc_server.o 00:05:52.998 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:52.998 CC lib/jsonrpc/jsonrpc_client.o 00:05:52.998 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:52.998 SO libspdk_trace_parser.so.5.0 00:05:53.258 SYMLINK libspdk_trace_parser.so 00:05:53.258 LIB libspdk_jsonrpc.a 00:05:53.258 SO libspdk_jsonrpc.so.6.0 00:05:53.258 SYMLINK libspdk_jsonrpc.so 00:05:53.518 LIB libspdk_env_dpdk.a 00:05:53.518 SO libspdk_env_dpdk.so.14.0 00:05:53.778 CC lib/rpc/rpc.o 00:05:53.778 SYMLINK libspdk_env_dpdk.so 00:05:53.778 LIB libspdk_rpc.a 00:05:53.778 SO libspdk_rpc.so.6.0 00:05:54.039 SYMLINK libspdk_rpc.so 00:05:54.300 CC lib/keyring/keyring.o 00:05:54.300 CC lib/keyring/keyring_rpc.o 00:05:54.300 CC lib/trace/trace.o 00:05:54.300 CC lib/trace/trace_flags.o 00:05:54.300 CC lib/trace/trace_rpc.o 00:05:54.300 CC lib/notify/notify.o 00:05:54.300 CC lib/notify/notify_rpc.o 00:05:54.300 LIB libspdk_notify.a 00:05:54.561 LIB libspdk_keyring.a 00:05:54.561 SO libspdk_notify.so.6.0 00:05:54.561 LIB 
libspdk_trace.a 00:05:54.561 SO libspdk_keyring.so.1.0 00:05:54.561 SYMLINK libspdk_notify.so 00:05:54.561 SO libspdk_trace.so.10.0 00:05:54.561 SYMLINK libspdk_keyring.so 00:05:54.561 SYMLINK libspdk_trace.so 00:05:54.822 CC lib/sock/sock.o 00:05:54.822 CC lib/sock/sock_rpc.o 00:05:54.822 CC lib/thread/thread.o 00:05:54.822 CC lib/thread/iobuf.o 00:05:55.083 LIB libspdk_sock.a 00:05:55.343 SO libspdk_sock.so.9.0 00:05:55.343 SYMLINK libspdk_sock.so 00:05:55.603 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:55.603 CC lib/nvme/nvme_ctrlr.o 00:05:55.603 CC lib/nvme/nvme_fabric.o 00:05:55.603 CC lib/nvme/nvme_ns_cmd.o 00:05:55.603 CC lib/nvme/nvme_ns.o 00:05:55.603 CC lib/nvme/nvme_pcie_common.o 00:05:55.603 CC lib/nvme/nvme_pcie.o 00:05:55.603 CC lib/nvme/nvme_qpair.o 00:05:55.603 CC lib/nvme/nvme.o 00:05:55.603 CC lib/nvme/nvme_quirks.o 00:05:55.603 CC lib/nvme/nvme_transport.o 00:05:55.603 CC lib/nvme/nvme_discovery.o 00:05:55.603 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:55.603 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:55.603 CC lib/nvme/nvme_tcp.o 00:05:55.603 CC lib/nvme/nvme_opal.o 00:05:55.603 CC lib/nvme/nvme_io_msg.o 00:05:55.603 CC lib/nvme/nvme_poll_group.o 00:05:55.603 CC lib/nvme/nvme_zns.o 00:05:55.603 CC lib/nvme/nvme_stubs.o 00:05:55.603 CC lib/nvme/nvme_auth.o 00:05:55.603 CC lib/nvme/nvme_cuse.o 00:05:55.603 CC lib/nvme/nvme_vfio_user.o 00:05:55.603 CC lib/nvme/nvme_rdma.o 00:05:55.864 LIB libspdk_thread.a 00:05:55.864 SO libspdk_thread.so.10.0 00:05:56.124 SYMLINK libspdk_thread.so 00:05:56.383 CC lib/virtio/virtio.o 00:05:56.383 CC lib/virtio/virtio_vhost_user.o 00:05:56.383 CC lib/virtio/virtio_pci.o 00:05:56.383 CC lib/virtio/virtio_vfio_user.o 00:05:56.383 CC lib/vfu_tgt/tgt_endpoint.o 00:05:56.383 CC lib/vfu_tgt/tgt_rpc.o 00:05:56.383 CC lib/init/json_config.o 00:05:56.383 CC lib/init/subsystem.o 00:05:56.383 CC lib/init/subsystem_rpc.o 00:05:56.383 CC lib/init/rpc.o 00:05:56.383 CC lib/blob/blobstore.o 00:05:56.383 CC lib/accel/accel.o 00:05:56.383 CC lib/accel/accel_rpc.o 00:05:56.383 CC lib/blob/request.o 00:05:56.383 CC lib/accel/accel_sw.o 00:05:56.383 CC lib/blob/zeroes.o 00:05:56.383 CC lib/blob/blob_bs_dev.o 00:05:56.642 LIB libspdk_init.a 00:05:56.642 SO libspdk_init.so.5.0 00:05:56.642 LIB libspdk_virtio.a 00:05:56.642 LIB libspdk_vfu_tgt.a 00:05:56.642 SO libspdk_virtio.so.7.0 00:05:56.642 SO libspdk_vfu_tgt.so.3.0 00:05:56.642 SYMLINK libspdk_init.so 00:05:56.642 SYMLINK libspdk_vfu_tgt.so 00:05:56.642 SYMLINK libspdk_virtio.so 00:05:56.902 CC lib/event/app.o 00:05:56.902 CC lib/event/log_rpc.o 00:05:56.902 CC lib/event/reactor.o 00:05:56.902 CC lib/event/app_rpc.o 00:05:56.902 CC lib/event/scheduler_static.o 00:05:57.161 LIB libspdk_accel.a 00:05:57.161 SO libspdk_accel.so.15.0 00:05:57.162 SYMLINK libspdk_accel.so 00:05:57.162 LIB libspdk_nvme.a 00:05:57.162 LIB libspdk_event.a 00:05:57.162 SO libspdk_nvme.so.13.0 00:05:57.421 SO libspdk_event.so.13.0 00:05:57.421 SYMLINK libspdk_event.so 00:05:57.421 CC lib/bdev/bdev_rpc.o 00:05:57.421 CC lib/bdev/bdev.o 00:05:57.421 CC lib/bdev/bdev_zone.o 00:05:57.421 CC lib/bdev/part.o 00:05:57.421 CC lib/bdev/scsi_nvme.o 00:05:57.421 SYMLINK libspdk_nvme.so 00:05:58.361 LIB libspdk_blob.a 00:05:58.361 SO libspdk_blob.so.11.0 00:05:58.361 SYMLINK libspdk_blob.so 00:05:58.931 CC lib/lvol/lvol.o 00:05:58.931 CC lib/blobfs/blobfs.o 00:05:58.931 CC lib/blobfs/tree.o 00:05:59.190 LIB libspdk_bdev.a 00:05:59.190 SO libspdk_bdev.so.15.0 00:05:59.450 LIB libspdk_blobfs.a 00:05:59.450 SYMLINK libspdk_bdev.so 00:05:59.450 SO 
libspdk_blobfs.so.10.0 00:05:59.450 LIB libspdk_lvol.a 00:05:59.450 SO libspdk_lvol.so.10.0 00:05:59.450 SYMLINK libspdk_blobfs.so 00:05:59.450 SYMLINK libspdk_lvol.so 00:05:59.710 CC lib/nbd/nbd.o 00:05:59.710 CC lib/nvmf/ctrlr.o 00:05:59.710 CC lib/nvmf/ctrlr_discovery.o 00:05:59.710 CC lib/nbd/nbd_rpc.o 00:05:59.710 CC lib/scsi/dev.o 00:05:59.710 CC lib/ublk/ublk.o 00:05:59.710 CC lib/nvmf/ctrlr_bdev.o 00:05:59.710 CC lib/scsi/lun.o 00:05:59.710 CC lib/ublk/ublk_rpc.o 00:05:59.710 CC lib/nvmf/subsystem.o 00:05:59.710 CC lib/scsi/port.o 00:05:59.710 CC lib/nvmf/nvmf.o 00:05:59.710 CC lib/scsi/scsi.o 00:05:59.710 CC lib/nvmf/nvmf_rpc.o 00:05:59.710 CC lib/nvmf/transport.o 00:05:59.710 CC lib/scsi/scsi_bdev.o 00:05:59.710 CC lib/nvmf/tcp.o 00:05:59.710 CC lib/scsi/scsi_pr.o 00:05:59.710 CC lib/nvmf/stubs.o 00:05:59.710 CC lib/scsi/scsi_rpc.o 00:05:59.710 CC lib/nvmf/vfio_user.o 00:05:59.710 CC lib/ftl/ftl_init.o 00:05:59.710 CC lib/ftl/ftl_core.o 00:05:59.710 CC lib/scsi/task.o 00:05:59.710 CC lib/nvmf/rdma.o 00:05:59.710 CC lib/nvmf/auth.o 00:05:59.710 CC lib/ftl/ftl_layout.o 00:05:59.710 CC lib/ftl/ftl_debug.o 00:05:59.710 CC lib/ftl/ftl_io.o 00:05:59.710 CC lib/ftl/ftl_l2p.o 00:05:59.710 CC lib/ftl/ftl_sb.o 00:05:59.710 CC lib/ftl/ftl_l2p_flat.o 00:05:59.710 CC lib/ftl/ftl_band.o 00:05:59.710 CC lib/ftl/ftl_nv_cache.o 00:05:59.710 CC lib/ftl/ftl_band_ops.o 00:05:59.710 CC lib/ftl/ftl_writer.o 00:05:59.710 CC lib/ftl/ftl_rq.o 00:05:59.710 CC lib/ftl/ftl_l2p_cache.o 00:05:59.710 CC lib/ftl/ftl_reloc.o 00:05:59.710 CC lib/ftl/ftl_p2l.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:59.710 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:59.710 CC lib/ftl/utils/ftl_conf.o 00:05:59.710 CC lib/ftl/utils/ftl_md.o 00:05:59.710 CC lib/ftl/utils/ftl_bitmap.o 00:05:59.710 CC lib/ftl/utils/ftl_property.o 00:05:59.710 CC lib/ftl/utils/ftl_mempool.o 00:05:59.710 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:59.710 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:59.710 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:59.710 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:59.710 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:59.710 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:59.710 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:59.710 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:59.710 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:59.710 CC lib/ftl/base/ftl_base_dev.o 00:05:59.710 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:59.710 CC lib/ftl/base/ftl_base_bdev.o 00:05:59.710 CC lib/ftl/ftl_trace.o 00:06:00.277 LIB libspdk_nbd.a 00:06:00.277 SO libspdk_nbd.so.7.0 00:06:00.277 LIB libspdk_scsi.a 00:06:00.277 SYMLINK libspdk_nbd.so 00:06:00.277 SO libspdk_scsi.so.9.0 00:06:00.277 SYMLINK libspdk_scsi.so 00:06:00.536 LIB libspdk_ublk.a 00:06:00.536 SO libspdk_ublk.so.3.0 00:06:00.536 SYMLINK libspdk_ublk.so 00:06:00.536 LIB libspdk_ftl.a 00:06:00.536 CC lib/vhost/vhost.o 00:06:00.536 CC lib/vhost/vhost_rpc.o 00:06:00.536 CC lib/vhost/vhost_blk.o 00:06:00.536 CC lib/vhost/vhost_scsi.o 00:06:00.536 CC lib/vhost/rte_vhost_user.o 00:06:00.536 CC lib/iscsi/conn.o 
00:06:00.536 CC lib/iscsi/init_grp.o 00:06:00.536 CC lib/iscsi/iscsi.o 00:06:00.536 CC lib/iscsi/md5.o 00:06:00.536 CC lib/iscsi/param.o 00:06:00.536 CC lib/iscsi/portal_grp.o 00:06:00.536 CC lib/iscsi/tgt_node.o 00:06:00.536 CC lib/iscsi/iscsi_subsystem.o 00:06:00.536 CC lib/iscsi/iscsi_rpc.o 00:06:00.536 CC lib/iscsi/task.o 00:06:00.796 SO libspdk_ftl.so.9.0 00:06:01.055 SYMLINK libspdk_ftl.so 00:06:01.315 LIB libspdk_vhost.a 00:06:01.315 LIB libspdk_nvmf.a 00:06:01.574 SO libspdk_vhost.so.8.0 00:06:01.574 SO libspdk_nvmf.so.18.0 00:06:01.574 SYMLINK libspdk_vhost.so 00:06:01.574 LIB libspdk_iscsi.a 00:06:01.574 SYMLINK libspdk_nvmf.so 00:06:01.574 SO libspdk_iscsi.so.8.0 00:06:01.834 SYMLINK libspdk_iscsi.so 00:06:02.404 CC module/vfu_device/vfu_virtio_blk.o 00:06:02.404 CC module/vfu_device/vfu_virtio.o 00:06:02.404 CC module/vfu_device/vfu_virtio_rpc.o 00:06:02.404 CC module/vfu_device/vfu_virtio_scsi.o 00:06:02.404 CC module/env_dpdk/env_dpdk_rpc.o 00:06:02.404 CC module/accel/iaa/accel_iaa.o 00:06:02.404 CC module/accel/iaa/accel_iaa_rpc.o 00:06:02.404 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:02.404 CC module/accel/dsa/accel_dsa.o 00:06:02.404 CC module/accel/error/accel_error_rpc.o 00:06:02.404 CC module/accel/dsa/accel_dsa_rpc.o 00:06:02.404 CC module/sock/posix/posix.o 00:06:02.404 CC module/blob/bdev/blob_bdev.o 00:06:02.404 CC module/accel/ioat/accel_ioat.o 00:06:02.404 CC module/accel/ioat/accel_ioat_rpc.o 00:06:02.404 CC module/accel/error/accel_error.o 00:06:02.404 CC module/scheduler/gscheduler/gscheduler.o 00:06:02.404 CC module/keyring/file/keyring.o 00:06:02.404 LIB libspdk_env_dpdk_rpc.a 00:06:02.404 CC module/keyring/file/keyring_rpc.o 00:06:02.404 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:02.404 SO libspdk_env_dpdk_rpc.so.6.0 00:06:02.663 SYMLINK libspdk_env_dpdk_rpc.so 00:06:02.663 LIB libspdk_accel_iaa.a 00:06:02.663 LIB libspdk_keyring_file.a 00:06:02.663 LIB libspdk_scheduler_gscheduler.a 00:06:02.663 LIB libspdk_accel_error.a 00:06:02.663 LIB libspdk_scheduler_dpdk_governor.a 00:06:02.663 LIB libspdk_scheduler_dynamic.a 00:06:02.663 LIB libspdk_accel_ioat.a 00:06:02.663 SO libspdk_accel_iaa.so.3.0 00:06:02.663 SO libspdk_keyring_file.so.1.0 00:06:02.663 SO libspdk_scheduler_dynamic.so.4.0 00:06:02.663 SO libspdk_scheduler_gscheduler.so.4.0 00:06:02.663 SO libspdk_accel_error.so.2.0 00:06:02.663 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:02.663 LIB libspdk_accel_dsa.a 00:06:02.663 SO libspdk_accel_ioat.so.6.0 00:06:02.663 LIB libspdk_blob_bdev.a 00:06:02.663 SYMLINK libspdk_accel_iaa.so 00:06:02.663 SYMLINK libspdk_scheduler_dynamic.so 00:06:02.663 SO libspdk_accel_dsa.so.5.0 00:06:02.663 SYMLINK libspdk_keyring_file.so 00:06:02.663 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:02.663 SYMLINK libspdk_scheduler_gscheduler.so 00:06:02.663 SO libspdk_blob_bdev.so.11.0 00:06:02.663 SYMLINK libspdk_accel_error.so 00:06:02.663 SYMLINK libspdk_accel_ioat.so 00:06:02.663 SYMLINK libspdk_accel_dsa.so 00:06:02.663 SYMLINK libspdk_blob_bdev.so 00:06:02.663 LIB libspdk_vfu_device.a 00:06:02.923 SO libspdk_vfu_device.so.3.0 00:06:02.923 SYMLINK libspdk_vfu_device.so 00:06:02.923 LIB libspdk_sock_posix.a 00:06:02.923 SO libspdk_sock_posix.so.6.0 00:06:03.183 SYMLINK libspdk_sock_posix.so 00:06:03.183 CC module/blobfs/bdev/blobfs_bdev.o 00:06:03.183 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:03.183 CC module/bdev/delay/vbdev_delay.o 00:06:03.183 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:03.183 CC module/bdev/passthru/vbdev_passthru.o 
00:06:03.183 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:03.183 CC module/bdev/gpt/gpt.o 00:06:03.183 CC module/bdev/gpt/vbdev_gpt.o 00:06:03.183 CC module/bdev/split/vbdev_split.o 00:06:03.183 CC module/bdev/split/vbdev_split_rpc.o 00:06:03.183 CC module/bdev/malloc/bdev_malloc.o 00:06:03.183 CC module/bdev/error/vbdev_error.o 00:06:03.183 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:03.183 CC module/bdev/error/vbdev_error_rpc.o 00:06:03.183 CC module/bdev/aio/bdev_aio.o 00:06:03.183 CC module/bdev/aio/bdev_aio_rpc.o 00:06:03.183 CC module/bdev/raid/bdev_raid.o 00:06:03.183 CC module/bdev/raid/bdev_raid_rpc.o 00:06:03.183 CC module/bdev/raid/bdev_raid_sb.o 00:06:03.183 CC module/bdev/raid/raid0.o 00:06:03.183 CC module/bdev/raid/raid1.o 00:06:03.183 CC module/bdev/raid/concat.o 00:06:03.183 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:03.183 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:03.183 CC module/bdev/null/bdev_null.o 00:06:03.183 CC module/bdev/nvme/bdev_nvme.o 00:06:03.183 CC module/bdev/null/bdev_null_rpc.o 00:06:03.183 CC module/bdev/lvol/vbdev_lvol.o 00:06:03.183 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:03.183 CC module/bdev/ftl/bdev_ftl.o 00:06:03.183 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:03.183 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:03.183 CC module/bdev/nvme/nvme_rpc.o 00:06:03.183 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:03.183 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:03.183 CC module/bdev/nvme/bdev_mdns_client.o 00:06:03.183 CC module/bdev/nvme/vbdev_opal.o 00:06:03.183 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:03.183 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:03.183 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:03.183 CC module/bdev/iscsi/bdev_iscsi.o 00:06:03.183 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:03.441 LIB libspdk_blobfs_bdev.a 00:06:03.441 LIB libspdk_bdev_split.a 00:06:03.441 SO libspdk_blobfs_bdev.so.6.0 00:06:03.441 SO libspdk_bdev_split.so.6.0 00:06:03.441 LIB libspdk_bdev_gpt.a 00:06:03.441 LIB libspdk_bdev_passthru.a 00:06:03.441 LIB libspdk_bdev_error.a 00:06:03.441 LIB libspdk_bdev_null.a 00:06:03.441 SO libspdk_bdev_gpt.so.6.0 00:06:03.441 SYMLINK libspdk_blobfs_bdev.so 00:06:03.441 SYMLINK libspdk_bdev_split.so 00:06:03.441 SO libspdk_bdev_error.so.6.0 00:06:03.441 SO libspdk_bdev_passthru.so.6.0 00:06:03.700 LIB libspdk_bdev_ftl.a 00:06:03.700 SO libspdk_bdev_null.so.6.0 00:06:03.700 LIB libspdk_bdev_delay.a 00:06:03.700 SYMLINK libspdk_bdev_gpt.so 00:06:03.700 SO libspdk_bdev_ftl.so.6.0 00:06:03.700 LIB libspdk_bdev_malloc.a 00:06:03.700 LIB libspdk_bdev_zone_block.a 00:06:03.700 LIB libspdk_bdev_aio.a 00:06:03.700 SO libspdk_bdev_delay.so.6.0 00:06:03.700 LIB libspdk_bdev_iscsi.a 00:06:03.700 SYMLINK libspdk_bdev_passthru.so 00:06:03.700 SYMLINK libspdk_bdev_error.so 00:06:03.700 SYMLINK libspdk_bdev_null.so 00:06:03.700 SO libspdk_bdev_zone_block.so.6.0 00:06:03.700 SO libspdk_bdev_aio.so.6.0 00:06:03.700 SO libspdk_bdev_malloc.so.6.0 00:06:03.700 SO libspdk_bdev_iscsi.so.6.0 00:06:03.700 SYMLINK libspdk_bdev_ftl.so 00:06:03.700 SYMLINK libspdk_bdev_delay.so 00:06:03.700 SYMLINK libspdk_bdev_aio.so 00:06:03.700 SYMLINK libspdk_bdev_iscsi.so 00:06:03.700 SYMLINK libspdk_bdev_malloc.so 00:06:03.700 SYMLINK libspdk_bdev_zone_block.so 00:06:03.700 LIB libspdk_bdev_lvol.a 00:06:03.700 LIB libspdk_bdev_virtio.a 00:06:03.700 SO libspdk_bdev_lvol.so.6.0 00:06:03.700 SO libspdk_bdev_virtio.so.6.0 00:06:03.959 SYMLINK libspdk_bdev_lvol.so 00:06:03.959 SYMLINK libspdk_bdev_virtio.so 00:06:03.959 LIB 
libspdk_bdev_raid.a 00:06:03.959 SO libspdk_bdev_raid.so.6.0 00:06:04.217 SYMLINK libspdk_bdev_raid.so 00:06:04.786 LIB libspdk_bdev_nvme.a 00:06:04.786 SO libspdk_bdev_nvme.so.7.0 00:06:05.046 SYMLINK libspdk_bdev_nvme.so 00:06:05.615 CC module/event/subsystems/iobuf/iobuf.o 00:06:05.615 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:05.615 CC module/event/subsystems/vmd/vmd.o 00:06:05.615 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:05.615 CC module/event/subsystems/keyring/keyring.o 00:06:05.615 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:05.615 CC module/event/subsystems/sock/sock.o 00:06:05.615 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:05.615 CC module/event/subsystems/scheduler/scheduler.o 00:06:05.615 LIB libspdk_event_sock.a 00:06:05.615 LIB libspdk_event_scheduler.a 00:06:05.615 LIB libspdk_event_vfu_tgt.a 00:06:05.615 LIB libspdk_event_keyring.a 00:06:05.874 LIB libspdk_event_vmd.a 00:06:05.874 LIB libspdk_event_iobuf.a 00:06:05.874 LIB libspdk_event_vhost_blk.a 00:06:05.874 SO libspdk_event_sock.so.5.0 00:06:05.874 SO libspdk_event_keyring.so.1.0 00:06:05.874 SO libspdk_event_scheduler.so.4.0 00:06:05.874 SO libspdk_event_vfu_tgt.so.3.0 00:06:05.874 SO libspdk_event_vhost_blk.so.3.0 00:06:05.874 SO libspdk_event_iobuf.so.3.0 00:06:05.874 SO libspdk_event_vmd.so.6.0 00:06:05.874 SYMLINK libspdk_event_sock.so 00:06:05.874 SYMLINK libspdk_event_scheduler.so 00:06:05.874 SYMLINK libspdk_event_vfu_tgt.so 00:06:05.874 SYMLINK libspdk_event_vhost_blk.so 00:06:05.874 SYMLINK libspdk_event_keyring.so 00:06:05.874 SYMLINK libspdk_event_iobuf.so 00:06:05.874 SYMLINK libspdk_event_vmd.so 00:06:06.134 CC module/event/subsystems/accel/accel.o 00:06:06.394 LIB libspdk_event_accel.a 00:06:06.394 SO libspdk_event_accel.so.6.0 00:06:06.394 SYMLINK libspdk_event_accel.so 00:06:06.653 CC module/event/subsystems/bdev/bdev.o 00:06:06.913 LIB libspdk_event_bdev.a 00:06:06.913 SO libspdk_event_bdev.so.6.0 00:06:06.913 SYMLINK libspdk_event_bdev.so 00:06:07.172 CC module/event/subsystems/scsi/scsi.o 00:06:07.172 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:07.172 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:07.172 CC module/event/subsystems/nbd/nbd.o 00:06:07.172 CC module/event/subsystems/ublk/ublk.o 00:06:07.431 LIB libspdk_event_scsi.a 00:06:07.431 LIB libspdk_event_nbd.a 00:06:07.431 LIB libspdk_event_ublk.a 00:06:07.431 SO libspdk_event_scsi.so.6.0 00:06:07.431 SO libspdk_event_nbd.so.6.0 00:06:07.431 SO libspdk_event_ublk.so.3.0 00:06:07.431 LIB libspdk_event_nvmf.a 00:06:07.431 SYMLINK libspdk_event_scsi.so 00:06:07.432 SYMLINK libspdk_event_nbd.so 00:06:07.432 SYMLINK libspdk_event_ublk.so 00:06:07.432 SO libspdk_event_nvmf.so.6.0 00:06:07.432 SYMLINK libspdk_event_nvmf.so 00:06:07.691 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:07.691 CC module/event/subsystems/iscsi/iscsi.o 00:06:07.951 LIB libspdk_event_vhost_scsi.a 00:06:07.951 LIB libspdk_event_iscsi.a 00:06:07.951 SO libspdk_event_vhost_scsi.so.3.0 00:06:07.951 SO libspdk_event_iscsi.so.6.0 00:06:07.951 SYMLINK libspdk_event_vhost_scsi.so 00:06:07.951 SYMLINK libspdk_event_iscsi.so 00:06:08.211 SO libspdk.so.6.0 00:06:08.211 SYMLINK libspdk.so 00:06:08.472 CXX app/trace/trace.o 00:06:08.472 CC app/spdk_nvme_identify/identify.o 00:06:08.472 CC app/trace_record/trace_record.o 00:06:08.472 CC app/spdk_lspci/spdk_lspci.o 00:06:08.472 CC app/spdk_nvme_discover/discovery_aer.o 00:06:08.472 TEST_HEADER include/spdk/accel.h 00:06:08.472 TEST_HEADER include/spdk/assert.h 00:06:08.472 CC 
app/spdk_nvme_perf/perf.o 00:06:08.472 TEST_HEADER include/spdk/accel_module.h 00:06:08.472 TEST_HEADER include/spdk/barrier.h 00:06:08.472 CC test/rpc_client/rpc_client_test.o 00:06:08.472 TEST_HEADER include/spdk/base64.h 00:06:08.472 TEST_HEADER include/spdk/bdev.h 00:06:08.472 TEST_HEADER include/spdk/bdev_zone.h 00:06:08.472 TEST_HEADER include/spdk/bdev_module.h 00:06:08.472 TEST_HEADER include/spdk/bit_array.h 00:06:08.472 TEST_HEADER include/spdk/blob_bdev.h 00:06:08.472 TEST_HEADER include/spdk/bit_pool.h 00:06:08.472 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:08.472 CC app/spdk_top/spdk_top.o 00:06:08.472 TEST_HEADER include/spdk/blobfs.h 00:06:08.472 TEST_HEADER include/spdk/blob.h 00:06:08.472 TEST_HEADER include/spdk/conf.h 00:06:08.472 TEST_HEADER include/spdk/config.h 00:06:08.472 TEST_HEADER include/spdk/cpuset.h 00:06:08.472 TEST_HEADER include/spdk/crc16.h 00:06:08.472 TEST_HEADER include/spdk/crc32.h 00:06:08.472 TEST_HEADER include/spdk/crc64.h 00:06:08.472 TEST_HEADER include/spdk/dif.h 00:06:08.472 TEST_HEADER include/spdk/dma.h 00:06:08.473 TEST_HEADER include/spdk/endian.h 00:06:08.473 TEST_HEADER include/spdk/env_dpdk.h 00:06:08.473 TEST_HEADER include/spdk/event.h 00:06:08.473 TEST_HEADER include/spdk/env.h 00:06:08.473 TEST_HEADER include/spdk/fd_group.h 00:06:08.473 TEST_HEADER include/spdk/fd.h 00:06:08.473 TEST_HEADER include/spdk/file.h 00:06:08.473 TEST_HEADER include/spdk/ftl.h 00:06:08.473 TEST_HEADER include/spdk/gpt_spec.h 00:06:08.473 TEST_HEADER include/spdk/hexlify.h 00:06:08.473 TEST_HEADER include/spdk/histogram_data.h 00:06:08.473 TEST_HEADER include/spdk/idxd.h 00:06:08.473 TEST_HEADER include/spdk/ioat.h 00:06:08.473 TEST_HEADER include/spdk/init.h 00:06:08.473 TEST_HEADER include/spdk/idxd_spec.h 00:06:08.473 TEST_HEADER include/spdk/ioat_spec.h 00:06:08.473 TEST_HEADER include/spdk/iscsi_spec.h 00:06:08.473 CC app/vhost/vhost.o 00:06:08.473 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:08.473 TEST_HEADER include/spdk/jsonrpc.h 00:06:08.473 TEST_HEADER include/spdk/keyring.h 00:06:08.473 TEST_HEADER include/spdk/keyring_module.h 00:06:08.473 TEST_HEADER include/spdk/json.h 00:06:08.473 TEST_HEADER include/spdk/likely.h 00:06:08.473 TEST_HEADER include/spdk/log.h 00:06:08.473 TEST_HEADER include/spdk/lvol.h 00:06:08.473 TEST_HEADER include/spdk/memory.h 00:06:08.473 TEST_HEADER include/spdk/mmio.h 00:06:08.473 CC app/nvmf_tgt/nvmf_main.o 00:06:08.473 TEST_HEADER include/spdk/nbd.h 00:06:08.473 TEST_HEADER include/spdk/notify.h 00:06:08.473 CC app/spdk_dd/spdk_dd.o 00:06:08.473 TEST_HEADER include/spdk/nvme.h 00:06:08.473 TEST_HEADER include/spdk/nvme_intel.h 00:06:08.473 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:08.473 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:08.473 TEST_HEADER include/spdk/nvme_spec.h 00:06:08.473 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:08.473 TEST_HEADER include/spdk/nvme_zns.h 00:06:08.473 TEST_HEADER include/spdk/nvmf.h 00:06:08.473 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:08.473 TEST_HEADER include/spdk/nvmf_spec.h 00:06:08.473 CC app/iscsi_tgt/iscsi_tgt.o 00:06:08.473 TEST_HEADER include/spdk/opal.h 00:06:08.473 TEST_HEADER include/spdk/nvmf_transport.h 00:06:08.473 TEST_HEADER include/spdk/opal_spec.h 00:06:08.473 TEST_HEADER include/spdk/pipe.h 00:06:08.473 TEST_HEADER include/spdk/pci_ids.h 00:06:08.473 TEST_HEADER include/spdk/reduce.h 00:06:08.473 TEST_HEADER include/spdk/queue.h 00:06:08.473 TEST_HEADER include/spdk/rpc.h 00:06:08.473 TEST_HEADER include/spdk/scheduler.h 00:06:08.473 TEST_HEADER 
include/spdk/scsi.h 00:06:08.473 TEST_HEADER include/spdk/scsi_spec.h 00:06:08.741 TEST_HEADER include/spdk/sock.h 00:06:08.741 TEST_HEADER include/spdk/stdinc.h 00:06:08.741 TEST_HEADER include/spdk/string.h 00:06:08.741 TEST_HEADER include/spdk/thread.h 00:06:08.741 CC app/spdk_tgt/spdk_tgt.o 00:06:08.741 TEST_HEADER include/spdk/trace.h 00:06:08.741 TEST_HEADER include/spdk/trace_parser.h 00:06:08.741 TEST_HEADER include/spdk/util.h 00:06:08.741 TEST_HEADER include/spdk/ublk.h 00:06:08.741 TEST_HEADER include/spdk/tree.h 00:06:08.741 TEST_HEADER include/spdk/uuid.h 00:06:08.741 TEST_HEADER include/spdk/version.h 00:06:08.741 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:08.741 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:08.741 TEST_HEADER include/spdk/vhost.h 00:06:08.741 TEST_HEADER include/spdk/vmd.h 00:06:08.741 TEST_HEADER include/spdk/xor.h 00:06:08.741 CXX test/cpp_headers/accel.o 00:06:08.741 TEST_HEADER include/spdk/zipf.h 00:06:08.741 CXX test/cpp_headers/accel_module.o 00:06:08.741 CXX test/cpp_headers/assert.o 00:06:08.741 CXX test/cpp_headers/barrier.o 00:06:08.741 CXX test/cpp_headers/base64.o 00:06:08.741 CXX test/cpp_headers/bdev.o 00:06:08.741 CXX test/cpp_headers/bdev_module.o 00:06:08.741 CXX test/cpp_headers/bdev_zone.o 00:06:08.741 CXX test/cpp_headers/bit_array.o 00:06:08.741 CXX test/cpp_headers/bit_pool.o 00:06:08.741 CXX test/cpp_headers/blob_bdev.o 00:06:08.741 CXX test/cpp_headers/blobfs_bdev.o 00:06:08.741 CXX test/cpp_headers/blobfs.o 00:06:08.741 CXX test/cpp_headers/blob.o 00:06:08.741 CXX test/cpp_headers/conf.o 00:06:08.741 CXX test/cpp_headers/config.o 00:06:08.741 CXX test/cpp_headers/cpuset.o 00:06:08.741 CXX test/cpp_headers/crc64.o 00:06:08.741 CXX test/cpp_headers/crc32.o 00:06:08.741 CXX test/cpp_headers/crc16.o 00:06:08.741 CXX test/cpp_headers/dif.o 00:06:08.741 CXX test/cpp_headers/dma.o 00:06:08.741 CC test/nvme/fdp/fdp.o 00:06:08.741 CC test/nvme/err_injection/err_injection.o 00:06:08.741 CC test/nvme/overhead/overhead.o 00:06:08.741 CC test/nvme/reserve/reserve.o 00:06:08.741 CC test/event/event_perf/event_perf.o 00:06:08.741 CC examples/vmd/lsvmd/lsvmd.o 00:06:08.741 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:08.741 CC test/nvme/reset/reset.o 00:06:08.741 CC test/nvme/aer/aer.o 00:06:08.741 CC test/event/reactor/reactor.o 00:06:08.741 CC examples/nvme/reconnect/reconnect.o 00:06:08.741 CC test/env/pci/pci_ut.o 00:06:08.741 CC test/nvme/boot_partition/boot_partition.o 00:06:08.741 CC examples/vmd/led/led.o 00:06:08.741 CC examples/util/zipf/zipf.o 00:06:08.741 CC test/nvme/compliance/nvme_compliance.o 00:06:08.741 CC test/nvme/sgl/sgl.o 00:06:08.741 CC examples/ioat/verify/verify.o 00:06:08.741 CC test/nvme/startup/startup.o 00:06:08.741 CC app/fio/nvme/fio_plugin.o 00:06:08.741 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:08.741 CC test/nvme/connect_stress/connect_stress.o 00:06:08.741 CC test/nvme/fused_ordering/fused_ordering.o 00:06:08.741 CC test/env/vtophys/vtophys.o 00:06:08.741 CC examples/idxd/perf/perf.o 00:06:08.741 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:08.741 CC examples/ioat/perf/perf.o 00:06:08.741 CC test/thread/poller_perf/poller_perf.o 00:06:08.741 CC examples/accel/perf/accel_perf.o 00:06:08.741 CC test/env/memory/memory_ut.o 00:06:08.741 CC test/event/reactor_perf/reactor_perf.o 00:06:08.741 CC test/app/histogram_perf/histogram_perf.o 00:06:08.741 CC examples/nvme/arbitration/arbitration.o 00:06:08.741 CC test/nvme/simple_copy/simple_copy.o 00:06:08.741 CC test/bdev/bdevio/bdevio.o 
00:06:08.741 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:08.741 CC test/nvme/e2edp/nvme_dp.o 00:06:08.741 CC examples/nvme/abort/abort.o 00:06:08.741 CC test/app/stub/stub.o 00:06:08.741 CC test/nvme/cuse/cuse.o 00:06:08.741 CC examples/nvme/hello_world/hello_world.o 00:06:08.741 CC test/app/jsoncat/jsoncat.o 00:06:08.741 CC examples/nvme/hotplug/hotplug.o 00:06:08.741 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:08.741 CC examples/thread/thread/thread_ex.o 00:06:08.741 CC examples/sock/hello_world/hello_sock.o 00:06:08.741 CC test/event/app_repeat/app_repeat.o 00:06:08.741 CC examples/bdev/hello_world/hello_bdev.o 00:06:08.742 CC examples/blob/cli/blobcli.o 00:06:08.742 CC test/accel/dif/dif.o 00:06:08.742 CC test/blobfs/mkfs/mkfs.o 00:06:08.742 CC app/fio/bdev/fio_plugin.o 00:06:08.742 CC examples/blob/hello_world/hello_blob.o 00:06:08.742 CC test/event/scheduler/scheduler.o 00:06:08.742 CC examples/nvmf/nvmf/nvmf.o 00:06:08.742 CC test/dma/test_dma/test_dma.o 00:06:09.003 CC examples/bdev/bdevperf/bdevperf.o 00:06:09.003 LINK spdk_lspci 00:06:09.003 CC test/app/bdev_svc/bdev_svc.o 00:06:09.003 LINK rpc_client_test 00:06:09.003 LINK nvmf_tgt 00:06:09.003 CC test/lvol/esnap/esnap.o 00:06:09.003 CC test/env/mem_callbacks/mem_callbacks.o 00:06:09.003 LINK spdk_trace_record 00:06:09.003 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:09.271 LINK vhost 00:06:09.271 LINK spdk_nvme_discover 00:06:09.271 LINK interrupt_tgt 00:06:09.271 LINK lsvmd 00:06:09.271 LINK spdk_tgt 00:06:09.271 LINK event_perf 00:06:09.271 CXX test/cpp_headers/endian.o 00:06:09.271 CXX test/cpp_headers/env_dpdk.o 00:06:09.271 CXX test/cpp_headers/env.o 00:06:09.271 LINK histogram_perf 00:06:09.271 CXX test/cpp_headers/event.o 00:06:09.271 CXX test/cpp_headers/fd_group.o 00:06:09.271 LINK iscsi_tgt 00:06:09.271 CXX test/cpp_headers/fd.o 00:06:09.271 CXX test/cpp_headers/file.o 00:06:09.271 LINK err_injection 00:06:09.271 LINK app_repeat 00:06:09.271 LINK stub 00:06:09.271 LINK zipf 00:06:09.271 LINK reserve 00:06:09.272 CXX test/cpp_headers/ftl.o 00:06:09.272 LINK poller_perf 00:06:09.272 CXX test/cpp_headers/gpt_spec.o 00:06:09.272 LINK vtophys 00:06:09.272 LINK env_dpdk_post_init 00:06:09.272 LINK pmr_persistence 00:06:09.272 LINK reactor 00:06:09.272 LINK led 00:06:09.272 CXX test/cpp_headers/hexlify.o 00:06:09.272 LINK reactor_perf 00:06:09.272 CXX test/cpp_headers/histogram_data.o 00:06:09.272 CXX test/cpp_headers/idxd.o 00:06:09.272 LINK cmb_copy 00:06:09.272 LINK jsoncat 00:06:09.272 LINK boot_partition 00:06:09.272 LINK startup 00:06:09.272 LINK connect_stress 00:06:09.272 LINK hello_world 00:06:09.272 LINK reset 00:06:09.272 LINK bdev_svc 00:06:09.272 CXX test/cpp_headers/idxd_spec.o 00:06:09.272 CXX test/cpp_headers/init.o 00:06:09.272 CXX test/cpp_headers/ioat.o 00:06:09.272 CXX test/cpp_headers/ioat_spec.o 00:06:09.272 LINK sgl 00:06:09.272 CXX test/cpp_headers/iscsi_spec.o 00:06:09.272 LINK hello_sock 00:06:09.272 CXX test/cpp_headers/json.o 00:06:09.272 LINK scheduler 00:06:09.272 CXX test/cpp_headers/jsonrpc.o 00:06:09.272 LINK hello_bdev 00:06:09.272 LINK doorbell_aers 00:06:09.272 LINK verify 00:06:09.272 LINK fused_ordering 00:06:09.272 CXX test/cpp_headers/keyring.o 00:06:09.272 LINK aer 00:06:09.272 LINK mkfs 00:06:09.272 LINK simple_copy 00:06:09.272 CXX test/cpp_headers/keyring_module.o 00:06:09.539 LINK ioat_perf 00:06:09.539 LINK fdp 00:06:09.539 CXX test/cpp_headers/likely.o 00:06:09.539 LINK spdk_dd 00:06:09.539 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:09.539 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:09.539 LINK idxd_perf 00:06:09.539 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:09.539 LINK nvme_compliance 00:06:09.539 LINK nvme_dp 00:06:09.539 LINK hello_blob 00:06:09.539 CXX test/cpp_headers/log.o 00:06:09.539 LINK thread 00:06:09.539 LINK reconnect 00:06:09.539 LINK hotplug 00:06:09.539 LINK arbitration 00:06:09.539 LINK overhead 00:06:09.539 CXX test/cpp_headers/memory.o 00:06:09.539 CXX test/cpp_headers/lvol.o 00:06:09.539 CXX test/cpp_headers/mmio.o 00:06:09.539 LINK dif 00:06:09.539 CXX test/cpp_headers/nbd.o 00:06:09.539 CXX test/cpp_headers/notify.o 00:06:09.539 CXX test/cpp_headers/nvme.o 00:06:09.539 CXX test/cpp_headers/nvme_intel.o 00:06:09.539 CXX test/cpp_headers/nvme_ocssd.o 00:06:09.539 CXX test/cpp_headers/nvme_spec.o 00:06:09.539 CXX test/cpp_headers/nvme_zns.o 00:06:09.539 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:09.539 CXX test/cpp_headers/nvmf_cmd.o 00:06:09.539 LINK nvmf 00:06:09.539 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:09.539 CXX test/cpp_headers/nvmf.o 00:06:09.539 CXX test/cpp_headers/nvmf_spec.o 00:06:09.539 CXX test/cpp_headers/nvmf_transport.o 00:06:09.539 CXX test/cpp_headers/opal.o 00:06:09.539 CXX test/cpp_headers/opal_spec.o 00:06:09.539 CXX test/cpp_headers/pci_ids.o 00:06:09.539 CXX test/cpp_headers/pipe.o 00:06:09.539 CXX test/cpp_headers/reduce.o 00:06:09.539 CXX test/cpp_headers/rpc.o 00:06:09.539 CXX test/cpp_headers/queue.o 00:06:09.539 CXX test/cpp_headers/scheduler.o 00:06:09.539 LINK accel_perf 00:06:09.539 CXX test/cpp_headers/scsi.o 00:06:09.539 CXX test/cpp_headers/scsi_spec.o 00:06:09.539 LINK nvme_manage 00:06:09.539 LINK bdevio 00:06:09.539 CXX test/cpp_headers/sock.o 00:06:09.540 CXX test/cpp_headers/stdinc.o 00:06:09.540 CXX test/cpp_headers/string.o 00:06:09.540 LINK abort 00:06:09.540 LINK pci_ut 00:06:09.540 CXX test/cpp_headers/thread.o 00:06:09.540 CXX test/cpp_headers/trace.o 00:06:09.540 CXX test/cpp_headers/trace_parser.o 00:06:09.540 LINK test_dma 00:06:09.540 LINK spdk_trace 00:06:09.540 CXX test/cpp_headers/tree.o 00:06:09.540 CXX test/cpp_headers/ublk.o 00:06:09.800 CXX test/cpp_headers/util.o 00:06:09.800 CXX test/cpp_headers/uuid.o 00:06:09.801 CXX test/cpp_headers/version.o 00:06:09.801 LINK blobcli 00:06:09.801 CXX test/cpp_headers/vfio_user_spec.o 00:06:09.801 CXX test/cpp_headers/vfio_user_pci.o 00:06:09.801 CXX test/cpp_headers/vmd.o 00:06:09.801 CXX test/cpp_headers/vhost.o 00:06:09.801 CXX test/cpp_headers/xor.o 00:06:09.801 CXX test/cpp_headers/zipf.o 00:06:09.801 LINK nvme_fuzz 00:06:10.060 LINK spdk_nvme_perf 00:06:10.060 LINK spdk_top 00:06:10.060 LINK mem_callbacks 00:06:10.060 LINK spdk_nvme 00:06:10.060 LINK spdk_bdev 00:06:10.060 LINK spdk_nvme_identify 00:06:10.320 LINK vhost_fuzz 00:06:10.320 LINK bdevperf 00:06:10.320 LINK memory_ut 00:06:10.320 LINK cuse 00:06:10.892 LINK iscsi_fuzz 00:06:12.802 LINK esnap 00:06:13.062 00:06:13.062 real 0m43.567s 00:06:13.062 user 6m32.491s 00:06:13.062 sys 3m39.318s 00:06:13.062 08:17:59 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:06:13.062 08:17:59 make -- common/autotest_common.sh@10 -- $ set +x 00:06:13.062 ************************************ 00:06:13.062 END TEST make 00:06:13.062 ************************************ 00:06:13.062 08:17:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:13.062 08:17:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:13.062 08:17:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:13.062 08:17:59 -- pm/common@42 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:06:13.062 08:17:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:13.062 08:17:59 -- pm/common@44 -- $ pid=28034 00:06:13.062 08:17:59 -- pm/common@50 -- $ kill -TERM 28034 00:06:13.062 08:17:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:13.062 08:17:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:13.062 08:17:59 -- pm/common@44 -- $ pid=28036 00:06:13.062 08:17:59 -- pm/common@50 -- $ kill -TERM 28036 00:06:13.062 08:17:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:13.062 08:17:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:13.062 08:17:59 -- pm/common@44 -- $ pid=28037 00:06:13.062 08:17:59 -- pm/common@50 -- $ kill -TERM 28037 00:06:13.062 08:17:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:13.062 08:17:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:13.062 08:17:59 -- pm/common@44 -- $ pid=28067 00:06:13.062 08:17:59 -- pm/common@50 -- $ sudo -E kill -TERM 28067 00:06:13.062 08:18:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.062 08:18:00 -- nvmf/common.sh@7 -- # uname -s 00:06:13.062 08:18:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.062 08:18:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.062 08:18:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.062 08:18:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.062 08:18:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.062 08:18:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.062 08:18:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.062 08:18:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.062 08:18:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.062 08:18:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.062 08:18:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:13.062 08:18:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:13.062 08:18:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.062 08:18:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.062 08:18:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.062 08:18:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.062 08:18:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.062 08:18:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.062 08:18:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.062 08:18:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.062 08:18:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.062 08:18:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.062 08:18:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.062 08:18:00 -- paths/export.sh@5 -- # export PATH 00:06:13.062 08:18:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.062 08:18:00 -- nvmf/common.sh@47 -- # : 0 00:06:13.062 08:18:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.062 08:18:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.062 08:18:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.062 08:18:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.062 08:18:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.062 08:18:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.063 08:18:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.063 08:18:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.323 08:18:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:13.323 08:18:00 -- spdk/autotest.sh@32 -- # uname -s 00:06:13.323 08:18:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:13.323 08:18:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:13.323 08:18:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:13.323 08:18:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:13.323 08:18:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:13.323 08:18:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:13.323 08:18:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:13.323 08:18:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:13.323 08:18:00 -- spdk/autotest.sh@48 -- # udevadm_pid=86998 00:06:13.323 08:18:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:13.323 08:18:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:13.323 08:18:00 -- pm/common@17 -- # local monitor 00:06:13.323 08:18:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:13.323 08:18:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:13.323 08:18:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:13.323 08:18:00 -- pm/common@21 -- # date +%s 00:06:13.323 08:18:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:13.323 08:18:00 -- pm/common@21 -- # date +%s 00:06:13.323 08:18:00 -- pm/common@25 -- # sleep 1 00:06:13.323 08:18:00 -- pm/common@21 -- # date +%s 00:06:13.323 08:18:00 -- pm/common@21 -- # date +%s 00:06:13.323 08:18:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715753880 00:06:13.323 08:18:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715753880 00:06:13.323 08:18:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715753880 00:06:13.323 08:18:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715753880 00:06:13.323 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715753880_collect-vmstat.pm.log 00:06:13.323 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715753880_collect-cpu-temp.pm.log 00:06:13.323 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715753880_collect-cpu-load.pm.log 00:06:13.323 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715753880_collect-bmc-pm.bmc.pm.log 00:06:14.294 08:18:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:14.294 08:18:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:14.294 08:18:01 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:14.294 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:14.294 08:18:01 -- spdk/autotest.sh@59 -- # create_test_list 00:06:14.294 08:18:01 -- common/autotest_common.sh@744 -- # xtrace_disable 00:06:14.294 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:14.294 08:18:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:14.294 08:18:01 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:14.294 08:18:01 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:14.294 08:18:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:14.294 08:18:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:14.294 08:18:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:14.294 08:18:01 -- common/autotest_common.sh@1451 -- # uname 00:06:14.294 08:18:01 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:06:14.294 08:18:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:14.294 08:18:01 -- common/autotest_common.sh@1471 -- # uname 00:06:14.294 08:18:01 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:06:14.294 08:18:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:06:14.294 08:18:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:06:14.294 08:18:01 -- spdk/autotest.sh@72 -- # hash lcov 00:06:14.294 08:18:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:14.294 08:18:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:06:14.294 --rc lcov_branch_coverage=1 00:06:14.294 --rc lcov_function_coverage=1 00:06:14.294 --rc genhtml_branch_coverage=1 00:06:14.294 --rc genhtml_function_coverage=1 00:06:14.294 --rc genhtml_legend=1 00:06:14.294 --rc geninfo_all_blocks=1 00:06:14.294 ' 
00:06:14.294 08:18:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:06:14.294 --rc lcov_branch_coverage=1 00:06:14.294 --rc lcov_function_coverage=1 00:06:14.294 --rc genhtml_branch_coverage=1 00:06:14.294 --rc genhtml_function_coverage=1 00:06:14.294 --rc genhtml_legend=1 00:06:14.294 --rc geninfo_all_blocks=1 00:06:14.294 ' 00:06:14.294 08:18:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:06:14.294 --rc lcov_branch_coverage=1 00:06:14.294 --rc lcov_function_coverage=1 00:06:14.294 --rc genhtml_branch_coverage=1 00:06:14.294 --rc genhtml_function_coverage=1 00:06:14.294 --rc genhtml_legend=1 00:06:14.294 --rc geninfo_all_blocks=1 00:06:14.294 --no-external' 00:06:14.294 08:18:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:06:14.294 --rc lcov_branch_coverage=1 00:06:14.294 --rc lcov_function_coverage=1 00:06:14.294 --rc genhtml_branch_coverage=1 00:06:14.294 --rc genhtml_function_coverage=1 00:06:14.294 --rc genhtml_legend=1 00:06:14.294 --rc geninfo_all_blocks=1 00:06:14.294 --no-external' 00:06:14.294 08:18:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:14.294 lcov: LCOV version 1.14 00:06:14.294 08:18:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:24.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:24.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:24.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:24.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:36.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:36.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:06:36.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:36.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:06:36.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:36.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:06:36.847 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:36.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:06:36.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:36.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:06:36.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:36.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:06:36.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:36.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:06:36.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:36.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:06:36.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:36.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:06:36.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 
00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:06:36.848 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:36.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:36.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:06:36.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:36.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:36.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:36.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:06:36.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:36.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:06:36.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:36.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:06:36.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:36.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:37.109 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:06:37.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:37.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:06:38.491 08:18:25 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:38.491 08:18:25 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:38.491 08:18:25 -- common/autotest_common.sh@10 -- # set +x 00:06:38.491 08:18:25 -- spdk/autotest.sh@91 -- # rm -f 00:06:38.491 08:18:25 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:41.789 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:06:41.789 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:06:41.789 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:06:41.789 08:18:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:06:41.789 08:18:28 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:06:41.789 08:18:28 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:06:41.789 08:18:28 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:06:41.789 08:18:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:41.789 08:18:28 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:06:41.789 08:18:28 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:06:41.789 08:18:28 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:41.789 08:18:28 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:41.789 08:18:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:06:41.789 08:18:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:41.789 08:18:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:41.789 08:18:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:06:41.789 08:18:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:06:41.789 08:18:28 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:41.789 No valid GPT data, bailing 00:06:41.789 08:18:28 -- scripts/common.sh@391 -- # 
blkid -s PTTYPE -o value /dev/nvme0n1 00:06:41.789 08:18:28 -- scripts/common.sh@391 -- # pt= 00:06:41.789 08:18:28 -- scripts/common.sh@392 -- # return 1 00:06:41.789 08:18:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:41.789 1+0 records in 00:06:41.789 1+0 records out 00:06:41.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630577 s, 166 MB/s 00:06:41.789 08:18:28 -- spdk/autotest.sh@118 -- # sync 00:06:41.789 08:18:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:41.789 08:18:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:41.789 08:18:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:47.077 08:18:33 -- spdk/autotest.sh@124 -- # uname -s 00:06:47.077 08:18:33 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:47.077 08:18:33 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:06:47.077 08:18:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.077 08:18:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.077 08:18:33 -- common/autotest_common.sh@10 -- # set +x 00:06:47.077 ************************************ 00:06:47.077 START TEST setup.sh 00:06:47.077 ************************************ 00:06:47.077 08:18:33 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:06:47.077 * Looking for test storage... 00:06:47.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:47.077 08:18:33 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:47.077 08:18:33 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:47.077 08:18:33 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:06:47.077 08:18:33 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.077 08:18:33 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.077 08:18:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:47.077 ************************************ 00:06:47.077 START TEST acl 00:06:47.078 ************************************ 00:06:47.078 08:18:33 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:06:47.078 * Looking for test storage... 
00:06:47.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:47.078 08:18:33 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:47.078 08:18:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:06:47.078 08:18:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:06:47.078 08:18:33 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:06:47.078 08:18:33 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:47.078 08:18:33 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:06:47.078 08:18:33 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:06:47.078 08:18:33 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:47.078 08:18:33 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:47.078 08:18:33 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:06:47.078 08:18:33 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:06:47.078 08:18:33 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:06:47.078 08:18:33 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:06:47.078 08:18:33 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:06:47.078 08:18:33 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:47.078 08:18:33 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:50.377 08:18:37 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:06:50.377 08:18:37 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:06:50.377 08:18:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:50.377 08:18:37 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:06:50.377 08:18:37 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:06:50.377 08:18:37 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:52.921 Hugepages 00:06:52.921 node hugesize free / total 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.921 00:06:52.921 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.921 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:52.922 08:18:39 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:06:52.922 08:18:39 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.922 08:18:39 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.922 08:18:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:53.183 ************************************ 00:06:53.183 START TEST denied 00:06:53.183 ************************************ 00:06:53.183 08:18:39 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:06:53.183 08:18:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:06:53.183 08:18:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:06:53.183 08:18:39 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:06:53.183 08:18:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:06:53.183 08:18:39 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:56.479 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:06:56.479 08:18:42 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:06:56.479 08:18:42 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:06:56.479 08:18:42 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:56.479 08:18:42 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:06:56.479 08:18:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:06:56.479 08:18:42 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:56.479 08:18:42 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:56.479 08:18:42 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:56.479 08:18:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:56.479 08:18:42 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:00.682 00:07:00.682 real 0m7.050s 00:07:00.682 user 0m2.274s 00:07:00.682 sys 0m4.059s 00:07:00.682 08:18:47 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.682 08:18:47 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:07:00.682 ************************************ 00:07:00.682 END TEST denied 00:07:00.682 ************************************ 00:07:00.682 08:18:47 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:07:00.682 08:18:47 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.682 08:18:47 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.682 08:18:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:00.682 ************************************ 00:07:00.682 START TEST allowed 00:07:00.682 ************************************ 00:07:00.682 08:18:47 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:07:00.682 08:18:47 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:07:00.682 08:18:47 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:07:00.682 08:18:47 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:07:00.682 08:18:47 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:07:00.682 08:18:47 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:03.979 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:07:03.979 08:18:50 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:07:03.979 08:18:50 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:07:03.979 08:18:50 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:07:03.979 08:18:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:03.979 08:18:50 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:07.277 00:07:07.277 real 0m6.999s 00:07:07.277 user 0m2.178s 00:07:07.277 sys 0m3.983s 00:07:07.277 08:18:54 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.277 08:18:54 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:07:07.277 ************************************ 00:07:07.277 END TEST allowed 00:07:07.277 ************************************ 00:07:07.277 00:07:07.277 real 0m20.294s 00:07:07.277 user 0m6.826s 00:07:07.277 sys 0m12.128s 00:07:07.277 08:18:54 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.277 08:18:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:07.277 ************************************ 00:07:07.277 END TEST acl 00:07:07.277 ************************************ 00:07:07.277 08:18:54 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:07:07.277 08:18:54 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:07.277 08:18:54 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.277 08:18:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:07.277 ************************************ 00:07:07.277 START TEST hugepages 00:07:07.277 ************************************ 00:07:07.277 08:18:54 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:07:07.277 * Looking for test storage... 00:07:07.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173351984 kB' 'MemAvailable: 177260984 kB' 'Buffers: 14492 kB' 'Cached: 10107168 kB' 'SwapCached: 0 kB' 'Active: 6537616 kB' 'Inactive: 4387996 kB' 'Active(anon): 5984728 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 807404 kB' 'Mapped: 145388 kB' 'Shmem: 5180776 kB' 'KReclaimable: 225132 kB' 'Slab: 644408 kB' 'SReclaimable: 225132 kB' 'SUnreclaim: 419276 kB' 'KernelStack: 20080 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982032 kB' 'Committed_AS: 8900028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311504 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB' 00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:07:07.277 08:18:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[xtrace condensed: the setup/common.sh@31/@32 read-compare-continue cycle repeats for every remaining /proc/meminfo field, MemAvailable through HugePages_Surp, none of which matches Hugepagesize]
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
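The trace above is SPDK's get_meminfo helper scanning /proc/meminfo one field at a time: IFS=': ' splits each line into a key and a value, every non-matching key hits the continue at setup/common.sh@32, and the first match echoes its value (here Hugepagesize, 2048 kB). A minimal standalone sketch of that pattern, assuming nothing beyond bash and /proc/meminfo (the name get_meminfo_field is illustrative, not the script's own):

    # Print the value of one /proc/meminfo field, e.g. Hugepagesize -> 2048.
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue  # same reject-and-continue cycle as the trace
            echo "$val"                        # numeric column only; a trailing kB unit lands in $_
            return 0
        done < /proc/meminfo
        return 1                               # field not present on this kernel
    }

    get_meminfo_field Hugepagesize             # -> 2048 on this runner

clear_hp then writes 0 into every per-node hugepages-*/nr_hugepages file (the four echo 0 entries, two nodes times two page sizes), so the test starts from a clean slate before allocating its own pages.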
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:07:07.540 08:18:54 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:07:07.540 08:18:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:07:07.540 08:18:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:07.540 08:18:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:07:07.540 ************************************
00:07:07.540 START TEST default_setup
00:07:07.540 ************************************
00:07:07.540 08:18:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:07:07.540 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:07:07.541 08:18:54 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:10.835 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:07:10.835 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:07:11.094 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:07:11.361 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175496852 kB' 'MemAvailable: 179405256 kB' 'Buffers: 14492 kB' 'Cached: 10107268 kB' 'SwapCached: 0 kB' 'Active: 6557580 kB' 'Inactive: 4387996 kB' 'Active(anon): 6004692 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 827268 kB' 'Mapped: 145420 kB' 'Shmem: 5180876 kB' 'KReclaimable: 223940 kB' 'Slab: 641656 kB' 'SReclaimable: 223940 kB' 'SUnreclaim: 417716 kB' 'KernelStack: 20256 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8913588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311728 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
[xtrace condensed: the same setup/common.sh@31/@32 read-compare-continue cycle rejects every field from MemTotal through HardwareCorrupted before AnonHugePages matches]
00:07:11.362 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:11.362 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:07:11.362 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:07:11.362 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
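get_test_nr_hugepages was called with a 2097152 kB target and node list '0'; at the 2048 kB default page size that is 2097152 / 2048 = 1024 pages, exactly the HugePages_Total / HugePages_Free pair the meminfo dump above reports once setup.sh has run. A sketch of that per-node allocation against the same sysfs layout the clear_hp loop iterates (the helper name is made up and this is not setup/hugepages.sh itself; needs root):

    # Reserve 2048 kB hugepages on a single NUMA node; illustrative sketch only.
    alloc_node_hugepages() {
        local node=$1 pages=$2
        local nr=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
        echo "$pages" > "$nr"
        # Re-read: the kernel may grant fewer pages than asked for when memory is fragmented.
        (( $(< "$nr") == pages )) || echo "node$node: short hugepage allocation" >&2
    }

    alloc_node_hugepages 0 $((2097152 / 2048))   # 1024 pages on node 0
    alloc_node_hugepages 1 0                     # nothing on node 1, matching nodes_test

Writing the per-node file rather than /proc/sys/vm/nr_hugepages is what lets the test pin all 1024 pages to node 0 instead of spreading them evenly across nodes.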
00:07:11.362 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175496868 kB' 'MemAvailable: 179405272 kB' 'Buffers: 14492 kB' 'Cached: 10107272 kB' 'SwapCached: 0 kB' 'Active: 6556460 kB' 'Inactive: 4387996 kB' 'Active(anon): 6003572 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 826024 kB' 'Mapped: 145388 kB' 'Shmem: 5180880 kB' 'KReclaimable: 223940 kB' 'Slab: 641840 kB' 'SReclaimable: 223940 kB' 'SUnreclaim: 417900 kB' 'KernelStack: 20192 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8913608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311712 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.363 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.364 08:18:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[setup/common.sh@31-32 scan: the remaining /proc/meminfo keys, SecPageTables through HugePages_Rsvd, are each read with IFS=': ' read -r var val _, fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p test, and continue]
00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:11.364 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
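The trace above is one full pass of the key-value scan this suite uses everywhere: mapfile the meminfo file into an array, strip any "Node N " prefix, then split each line on IFS=': ' and compare keys until the requested one matches, echoing its value. A minimal standalone sketch of that technique follows; the name get_meminfo_sketch is illustrative, not the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    shopt -s extglob  # the +([0-9]) patterns below need extended globbing

    # get_meminfo_sketch KEY [NODE]: print KEY's value from /proc/meminfo,
    # or from /sys/devices/system/node/nodeNODE/meminfo when NODE is given.
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _ mem_f=/proc/meminfo
        local -a mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # per-node files prefix every line with "Node N "; strip that off
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp     # prints 0 on the machine traced here
    get_meminfo_sketch HugePages_Total 0  # per-node query against node 0

Linear rescanning of the whole file per key is what makes the trace so long; the values themselves come from a single printf'd snapshot, so each query sees a consistent view.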
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:07:11.365 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175497904 kB' 'MemAvailable: 179406308 kB' 'Buffers: 14492 kB' 'Cached: 10107288 kB' 'SwapCached: 0 kB' 'Active: 6556468 kB' 'Inactive: 4387996 kB' 'Active(anon): 6003580 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 826048 kB' 'Mapped: 145312 kB' 'Shmem: 5180896 kB' 'KReclaimable: 223940 kB' 'Slab: 641900 kB' 'SReclaimable: 223940 kB' 'SUnreclaim: 417960 kB' 'KernelStack: 20176 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8913628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311728 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
[setup/common.sh@31-32 scan: each key above, MemTotal through HugePages_Free, is read with IFS=': ' read -r var val _, fails the \H\u\g\e\P\a\g\e\s\_\R\s\v\d test, and continues]
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:11.367 nr_hugepages=1024
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:11.367 resv_hugepages=0
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:11.367 surplus_hugepages=0
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:11.367 anon_hugepages=0
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:07:11.367 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175496468 kB' 'MemAvailable: 179404872 kB' 'Buffers: 14492 kB' 'Cached: 10107308 kB' 'SwapCached: 0 kB' 'Active: 6556288 kB' 'Inactive: 4387996 kB' 'Active(anon): 6003400 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 825828 kB' 'Mapped: 145312 kB' 'Shmem: 5180916 kB' 'KReclaimable: 223940 kB' 'Slab: 641900 kB' 'SReclaimable: 223940 kB' 'SUnreclaim: 417960 kB' 'KernelStack: 20128 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8913648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311744 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
[setup/common.sh@31-32 scan: each key above, MemTotal through Unaccepted, is read with IFS=': ' read -r var val _, fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l test, and continues]
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:11.368 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:11.369 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
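get_nodes enumerates /sys/devices/system/node/node<N> and records each node's hugepage count; on this machine all 1024 pages landed on node 0 and none on node 1, hence no_nodes=2. A minimal sketch of that walk, again reusing the illustrative helper (the real script may read a different sysfs attribute per node):

    # Per-node hugepage inventory, as the get_nodes trace suggests.
    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}   # /sys/devices/system/node/node0 -> 0
        nodes_sys[n]=$(get_meminfo_sketch HugePages_Total "$n")
    done
    echo "hugepages per node: ${nodes_sys[*]}"   # on this box: 1024 0

With the totals per node in hand, the test then re-queries HugePages_Surp for node 0 specifically, which is the per-node pass traced below.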
00:07:11.369 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:11.369 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:11.369 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:07:11.369 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:07:11.369 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:07:11.369 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:11.631 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:11.631 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:11.631 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:07:11.631 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:11.631 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:07:11.631 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:07:11.631 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85308372 kB' 'MemUsed: 12354312 kB' 'SwapCached: 0 kB' 'Active: 5005124 kB' 'Inactive: 4015732 kB' 'Active(anon): 4540060 kB' 'Inactive(anon): 0 kB' 'Active(file): 465064 kB' 'Inactive(file): 4015732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8465876 kB' 'Mapped: 118356 kB' 'AnonPages: 558164 kB' 'Shmem: 3985080 kB' 'KernelStack: 12168 kB' 'PageTables: 5724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103816 kB' 'Slab: 303284 kB' 'SReclaimable: 103816 kB' 'SUnreclaim: 199468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 scan: each node0 key above, MemTotal through Unaccepted, is read with IFS=': ' read -r var val _, fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p test, and continues]
00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:11.632 08:18:58
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:11.632 node0=1024 expecting 1024 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:11.632 00:07:11.632 real 0m4.020s 00:07:11.632 user 0m1.337s 00:07:11.632 sys 0m1.932s 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.632 08:18:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 ************************************ 00:07:11.632 END TEST default_setup 00:07:11.632 ************************************ 00:07:11.632 08:18:58 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:07:11.632 08:18:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:11.632 08:18:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.632 08:18:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 ************************************ 00:07:11.632 START TEST per_node_1G_alloc 00:07:11.632 ************************************ 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
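The long runs of "[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" followed by "continue" above are setup/common.sh's get_meminfo walking a meminfo snapshot one "Key: value" line at a time; the backslashes are only how set -x prints the literal right-hand pattern of the == test. The scan stops at the requested key and echoes its value, here HugePages_Surp is 0, which lets default_setup close out its accounting and pass the "node0=1024 expecting 1024" check. A minimal sketch of that scanning pattern, reading /proc/meminfo directly rather than from the mapfile snapshot the script actually iterates:

    get_meminfo() {
        local get=$1 var val _
        # Split each "Key:   value kB" line on ':' and spaces.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ...
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # requested key not present
    }

    get_meminfo HugePages_Surp   # prints 0 on this box, per the trace

With default_setup done, run_test launches per_node_1G_alloc, and the trace that follows is get_test_nr_hugepages converting the requested size (1048576 kB, i.e. 1G per node) into a default-size page count.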
00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:11.632 08:18:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:14.176 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:14.176 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:14.176 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:14.441 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:07:14.441 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:07:14.441 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:14.441 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175509532 kB' 'MemAvailable: 179417976 kB' 'Buffers: 14492 kB' 'Cached: 10107412 kB' 'SwapCached: 0 kB' 'Active: 6556708 kB' 'Inactive: 4387996 kB' 'Active(anon): 6003820 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 826056 kB' 'Mapped: 144128 kB' 'Shmem: 5181020 kB' 'KReclaimable: 224020 kB' 'Slab: 641576 kB' 'SReclaimable: 224020 kB' 'SUnreclaim: 417556 kB' 'KernelStack: 20000 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8902928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311680 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 
kB' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 
08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.442 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
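verify_nr_hugepages has now taken its first reading: because transparent hugepages report "always [madvise] never" (the selected mode is not "[never]"), it probes AnonHugePages, gets anon=0, and moves on to HugePages_Surp. The common.sh@17-29 preamble before each snapshot shows how the source file is chosen: with no node argument, "node=" stays empty, the /sys/devices/system/node/node/meminfo existence test fails, and the global /proc/meminfo is used; per-node files would carry a "Node N " prefix on every line, which the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of that selection logic, with names mirroring the trace (illustrative, not the script verbatim):

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Prefer the per-node file when a node number was given and exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop "Node 0 " prefixes, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo AnonHugePages      # global, from /proc/meminfo
    get_meminfo HugePages_Surp 0   # node 0's view, if the sysfs file exists

The same full-snapshot scan then repeats below for HugePages_Surp, which also comes back 0.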
00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175510688 kB' 'MemAvailable: 179419112 kB' 'Buffers: 14492 kB' 'Cached: 10107416 kB' 'SwapCached: 0 kB' 'Active: 6556404 kB' 'Inactive: 4387996 kB' 'Active(anon): 6003516 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 825808 kB' 'Mapped: 144108 kB' 'Shmem: 5181024 kB' 'KReclaimable: 223980 kB' 'Slab: 641532 kB' 'SReclaimable: 223980 kB' 'SUnreclaim: 417552 kB' 'KernelStack: 20000 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8902948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311664 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.443 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.444 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:14.445 08:19:01 
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:14.445 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175511508 kB' 'MemAvailable: 179419932 kB' 'Buffers: 14492 kB' 'Cached: 10107432 kB' 'SwapCached: 0 kB' 'Active: 6556396 kB' 'Inactive: 4387996 kB' 'Active(anon): 6003508 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 825792 kB' 'Mapped: 144108 kB' 'Shmem: 5181040 kB' 'KReclaimable: 223980 kB' 'Slab: 641516 kB' 'SReclaimable: 223980 kB' 'SUnreclaim: 417536 kB' 'KernelStack: 20000 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8902968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311664 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trimmed: per-field scan of the snapshot (MemTotal ... HugePages_Free) until HugePages_Rsvd matches]
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:14.447 nr_hugepages=1024
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:14.447 resv_hugepages=0
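With surp and resv computed and nr_hugepages echoed, the next traced step (setup/hugepages.sh@107, just below) asserts that the kernel's HugePages_Total equals the requested pages plus surplus plus reserved. A standalone equivalent of that accounting check; the variable names mirror the trace, the awk extraction is illustrative:

nr_hugepages=1024 surp=0 resv=0      # values echoed above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2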
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:14.447 surplus_hugepages=0
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:14.447 anon_hugepages=0
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:14.447 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' ... [/proc/meminfo snapshot identical to the one above except 'Committed_AS: 8902992 kB']
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trimmed: per-field scan of the snapshot (MemTotal ... Unaccepted) until HugePages_Total matches]
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
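get_nodes, traced above, enumerates the NUMA nodes from sysfs and records each node's expected share of the 1024 pages (512 apiece on this two-node rig); the per-node loop that follows then re-reads each node's own meminfo to verify the split. A sketch of both steps, assuming the sysfs layout the trace shows:

shopt -s extglob
declare -a nodes_sys
# Enumerate nodes as hugepages.sh@29-30 does; 512 is the per-node
# share recorded in the trace.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512
done
no_nodes=${#nodes_sys[@]}            # 2 on this machine
(( no_nodes > 0 )) || { echo 'no NUMA nodes found' >&2; exit 1; }
# Spot-check each node's hugepage counters with awk instead of the
# field-by-field shell walk (node0 reports HugePages_Total: 512 below).
for d in /sys/devices/system/node/node+([0-9]); do
    awk -v n="${d##*node}" '/HugePages_(Total|Free|Surp):/ {printf "node%s %s %s\n", n, $(NF-1), $NF}' "$d/meminfo"
done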
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:14.449 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:14.711 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86365464 kB' 'MemUsed: 11297220 kB' 'SwapCached: 0 kB' 'Active: 5003504 kB' 'Inactive: 4015732 kB' 'Active(anon): 4538440 kB' 'Inactive(anon): 0 kB' 'Active(file): 465064 kB' 'Inactive(file): 4015732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8465956 kB' 'Mapped: 117152 kB' 'AnonPages: 556472 kB' 'Shmem: 3985160 kB' 'KernelStack: 12024 kB' 'PageTables: 5264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103928 kB' 'Slab: 303504 kB' 'SReclaimable: 103928 kB' 'SUnreclaim: 199576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trimmed: per-field scan of the node0 snapshot in progress (MemTotal ... HugePages_Total), still searching for HugePages_Surp; trace continues]
00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.712 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 89146620 kB' 'MemUsed: 4571860 kB' 'SwapCached: 0 kB' 'Active: 1553732 kB' 'Inactive: 372264 kB' 'Active(anon): 1465908 kB' 'Inactive(anon): 0 kB' 'Active(file): 87824 kB' 'Inactive(file): 372264 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1656016 kB' 'Mapped: 26956 kB' 'AnonPages: 270144 kB' 'Shmem: 1195928 kB' 'KernelStack: 7976 kB' 'PageTables: 3016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120052 kB' 'Slab: 338012 kB' 'SReclaimable: 120052 kB' 'SUnreclaim: 217960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
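[editor's note] For reference, the get_meminfo calls traced here resolve to something like the sketch below, reconstructed from the xtrace records: the per-node sysfs path selection, the "Node <n> " prefix strip, and the field-by-field scan are all visible above. This is a hedged approximation of setup/common.sh, not the verbatim SPDK script.

# Look up one meminfo field, system-wide or for a single NUMA node.
get_meminfo() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node meminfo lives in sysfs and prefixes each line with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob                    # needed for the +([0-9]) pattern below
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # strip the per-node prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Surp 1   # prints 0 for node1 in the run above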
00:07:14.713 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # [node1 meminfo scan: each field (MemTotal through HugePages_Free) compared against HugePages_Surp and skipped -- repeated IFS=': ' / read -r var val _ / continue iterations elided]
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:07:14.714 node0=512 expecting 512
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:07:14.714 node1=512 expecting 512
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:07:14.714
00:07:14.714 real 0m3.035s
00:07:14.714 user 0m1.217s
00:07:14.714 sys 0m1.882s
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:14.714 08:19:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:07:14.714 ************************************
00:07:14.714 END TEST per_node_1G_alloc
00:07:14.714 ************************************
00:07:14.714 08:19:01 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:07:14.714 08:19:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
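[editor's note] An aside on the sorted_t/sorted_s assignments just traced: indexing an associative array by the per-node counts is a compact bash idiom for collecting the distinct values, so an even allocation leaves exactly one key. A minimal sketch of that idiom, with the 512/512 values taken from the echoes above; the nodes_sys contents and the final assertion are assumptions for illustration.

# Collect the distinct per-node page counts; an even split leaves one key.
declare -a nodes_test=(512 512)   # expected pages per node (from the trace)
declare -a nodes_sys=(512 512)    # kernel-reported pages per node (assumed here)
declare -A sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1
    sorted_s[${nodes_sys[node]}]=1
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done
# Pass iff both sets collapsed to a single value.
(( ${#sorted_t[@]} == 1 && ${#sorted_s[@]} == 1 )) && echo "even allocation verified"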
00:07:14.714 08:19:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:14.714 08:19:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:07:14.714 ************************************
00:07:14.714 START TEST even_2G_alloc
00:07:14.714 ************************************
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:07:14.714 08:19:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
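[editor's note] The arithmetic in the get_test_nr_hugepages trace above is worth spelling out: a 2 GiB request (2097152 kB) divided by the 2048 kB default hugepage size gives nr_hugepages=1024, and the descending per-node loop hands each of the two nodes an equal share. A hedged sketch of that split; variable names follow the trace, and the division by node count is inferred from the 512/512 result rather than read from the script.

# 2097152 kB requested / 2048 kB per page = 1024 pages, split over 2 nodes.
size=2097152              # kB, i.e. 2 GiB
default_hugepages=2048    # kB, matching 'Hugepagesize: 2048 kB' in the snapshots
nr_hugepages=$(( size / default_hugepages ))   # 1024
_no_nodes=2
declare -a nodes_test
for (( node = _no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / _no_nodes ))   # 512 each, node1 first
done
echo "nr_hugepages=$nr_hugepages per_node=${nodes_test[*]}"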
00:07:17.257 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:17.257 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:17.257 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:17.257 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:17.257 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:17.257 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:17.257 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:17.257 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:17.257 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:17.522 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:17.522 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:17.522 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:17.522 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:17.522 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:17.522 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:17.522 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:17.522 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:17.522 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175471732 kB' 'MemAvailable: 179380156 kB' 'Buffers: 14492 kB' 'Cached: 10107568 kB' 'SwapCached: 0 kB' 'Active: 6560856 kB' 'Inactive: 4387996 kB' 'Active(anon): 6007968 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 829728 kB' 'Mapped: 144228 kB' 'Shmem: 5181176 kB' 'KReclaimable: 223980 kB' 'Slab: 641440 kB' 'SReclaimable: 223980 kB' 'SUnreclaim: 417460 kB' 'KernelStack: 20000 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8903736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311648 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
00:07:17.523 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # [system meminfo scan: each field (MemTotal through FilePmdMapped) compared against AnonHugePages and skipped -- repeated IFS=': ' / read -r var val _ / continue iterations elided]
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
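[editor's note] The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] guard and the anon=0 that follows suggest AnonHugePages is only folded into the accounting when transparent hugepages are not fully disabled; in this run THP reports 'always [madvise] never' (madvise selected) and AnonHugePages is 0 kB. A rough sketch of that guard, reusing the get_meminfo sketch from earlier; how $anon is consumed downstream is an assumption.

# Fold anonymous THP usage into the accounting only when THP is enabled.
anon=0
thp=/sys/kernel/mm/transparent_hugepage/enabled
if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # kB; 0 in the run above
fi
echo "anon=$anon"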
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175471656 kB' 'MemAvailable: 179380080 kB' 'Buffers: 14492 kB' 'Cached: 10107572 kB' 'SwapCached: 0 kB' 'Active: 6560216 kB' 'Inactive: 4387996 kB' 'Active(anon): 6007328 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 829472 kB' 'Mapped: 144128 kB' 'Shmem: 5181180 kB' 'KReclaimable: 223980 kB' 'Slab: 641432 kB' 'SReclaimable: 223980 kB' 'SUnreclaim: 417452 kB' 'KernelStack: 20000 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8903756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311632 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
00:07:17.524 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # [system meminfo scan: each field compared against HugePages_Surp and skipped -- repeated IFS=': ' / read -r var val _ / continue iterations elided]
00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.525 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:17.526 
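(For readability, here is a minimal sketch of what the get_meminfo helper traced above appears to do, reconstructed only from these xtrace lines; the node-argument branch and the exact loop shape are assumptions, and the real setup/common.sh may differ.)

    # Sketch reconstructed from the xtrace -- not the verbatim SPDK helper.
    get_meminfo() {
        local get=$1 node=${2:-}         # field name, optional NUMA node
        local var val
        local mem_f mem
        shopt -s extglob                 # needed for the +([0-9]) pattern below
        mem_f=/proc/meminfo
        # Assumed branch: with a node argument, read that node's meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node N " prefix
        # The compare/continue cycle seen in the trace: split each "Key: value kB"
        # line on ': ' and stop at the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                  # common.sh@33: print the value, return 0
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp           # prints 0 for the snapshot above

(hugepages.sh@99 captures that 0 as surp=0, which is what the following trace entries act on.)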
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:17.526 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175472236 kB' 'MemAvailable: 179380660 kB' 'Buffers: 14492 kB' 'Cached: 10107588 kB' 'SwapCached: 0 kB' 'Active: 6560224 kB' 'Inactive: 4387996 kB' 'Active(anon): 6007336 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 829480 kB' 'Mapped: 144128 kB' 'Shmem: 5181196 kB' 'KReclaimable: 223980 kB' 'Slab: 641432 kB' 'SReclaimable: 223980 kB' 'SUnreclaim: 417452 kB' 'KernelStack: 20000 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8903776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311632 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
[xtrace elided, 00:07:17.526-00:07:17.528: setup/common.sh@31-@32 repeat the read/compare/continue cycle once per field of the snapshot above until HugePages_Rsvd matches]
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:17.528 nr_hugepages=1024
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:17.528 resv_hugepages=0
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:17.528 surplus_hugepages=0
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:17.528 anon_hugepages=0
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
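(The @102-@109 entries above are the even_2G_alloc bookkeeping. A minimal sketch of that step, under stated assumptions: it reuses the get_meminfo sketch from earlier, the nr_hugepages and anon variables are assumed to be computed before this point in hugepages.sh, and 1024 is the expected count of 2048 kB pages, i.e. 2G of hugepages.)

    # Sketch of the accounting traced at hugepages.sh@99-@110; assumed shape,
    # not the verbatim script. Assumed values, set earlier in the real test:
    nr_hugepages=1024 anon=0
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    echo "nr_hugepages=$nr_hugepages"    # -> nr_hugepages=1024
    echo "resv_hugepages=$resv"          # -> resv_hugepages=0
    echo "surplus_hugepages=$surp"       # -> surplus_hugepages=0
    echo "anon_hugepages=$anon"          # -> anon_hugepages=0
    # 1024 pages x 2048 kB = 2G; the test asserts none are reserved or surplus.
    (( 1024 == nr_hugepages + surp + resv ))
    (( 1024 == nr_hugepages ))

(hugepages.sh@110 then re-reads HugePages_Total, which produces the third meminfo snapshot below.)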
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:17.528 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:17.529 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175472740 kB' 'MemAvailable: 179381164 kB' 'Buffers: 14492 kB' 'Cached: 10107588 kB' 'SwapCached: 0 kB' 'Active: 6560224 kB' 'Inactive: 4387996 kB' 'Active(anon): 6007336 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 829496 kB' 'Mapped: 144128 kB' 'Shmem: 5181196 kB' 'KReclaimable: 223980 kB' 'Slab: 641432 kB' 'SReclaimable: 223980 kB' 'SUnreclaim: 417452 kB' 'KernelStack: 20000 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8903800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311632 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
[xtrace elided, 00:07:17.529-00:07:17.794: setup/common.sh@31-@32 repeat the read/compare/continue cycle against HugePages_Total for each field through 'Unaccepted'; the cycle continues below]
00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31
-- # IFS=': ' 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86345804 kB' 'MemUsed: 11316880 kB' 'SwapCached: 0 kB' 'Active: 5004088 kB' 'Inactive: 4015732 kB' 'Active(anon): 4539024 kB' 'Inactive(anon): 0 kB' 'Active(file): 465064 kB' 'Inactive(file): 4015732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8466072 kB' 'Mapped: 117172 kB' 'AnonPages: 556428 kB' 'Shmem: 3985276 kB' 'KernelStack: 12024 kB' 'PageTables: 5264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103960 kB' 'Slab: 303688 kB' 'SReclaimable: 103960 kB' 'SUnreclaim: 199728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.794 08:19:04 
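Note: the xtrace above is setup/common.sh's get_meminfo resolving HugePages_Total (1024) from /proc/meminfo and then starting per-node HugePages_Surp lookups; the long compare/continue runs are its field-by-field scan. A minimal stand-alone sketch of that lookup technique follows. It is a paraphrase of the traced logic, not the verbatim SPDK source: the real script uses mapfile plus an extglob prefix strip where this sketch substitutes sed.

    #!/usr/bin/env bash
    # Resolve one "Key: value" row from /proc/meminfo, or from the per-node
    # copy under /sys/devices/system/node/nodeN/meminfo when a node is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each row with "Node N "; drop it, then split
        # the remaining "Key: value [kB]" on ':' / ' ' and match the key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Total    # prints 1024 on this host
    get_meminfo HugePages_Surp 0   # prints 0 for NUMA node 0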
00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:17.794 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the per-field scan repeats over the node0 snapshot above until the requested key is reached ...]
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:17.795 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 89127496 kB' 'MemUsed: 4590984 kB' 'SwapCached: 0 kB' 'Active: 1556856 kB' 'Inactive: 372264 kB' 'Active(anon): 1469032 kB' 'Inactive(anon): 0 kB' 'Active(file): 87824 kB' 'Inactive(file): 372264 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1656048 kB' 'Mapped: 26956 kB' 'AnonPages: 273208 kB' 'Shmem: 1195960 kB' 'KernelStack: 7976 kB' 'PageTables: 2972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120020 kB' 'Slab: 337744 kB' 'SReclaimable: 120020 kB' 'SUnreclaim: 217724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
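Note: both per-node snapshots report HugePages_Total: 512 and HugePages_Surp: 0, and the trace folds reserved and surplus pages into each node's expected count (setup/hugepages.sh@115-117). A sketch of that accumulation, assuming the two-node/512-page layout shown above and reusing the get_meminfo sketch from the earlier note:

    # Expected hugepages per node for even_2G_alloc on this 2-node box.
    nodes_test=(512 512)
    resv=0   # reserved pages; zero in this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        # Add any surplus pages the kernel grew on that node (0 here).
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done
    echo "${nodes_test[@]}"   # -> 512 512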
00:07:17.796 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:17.796 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the per-field scan repeats over the node1 snapshot above until the requested key is reached ...]
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:07:17.797 node0=512 expecting 512
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:07:17.797 node1=512 expecting 512
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:07:17.797 
00:07:17.797 real 0m3.023s
00:07:17.797 user 0m1.224s
00:07:17.797 sys 0m1.862s
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:17.797 08:19:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:07:17.797 ************************************
00:07:17.797 END TEST even_2G_alloc
00:07:17.797 ************************************
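Note: even_2G_alloc passes because each NUMA node holds exactly the 512 pages the test expected. The closing assertion in the trace (setup/hugepages.sh@126-130) keys two arrays by the distinct per-node counts and compares them, which is why the final check collapses to [[ 512 == 512 ]]. A sketch of that comparison, with nodes_sys standing for the counts read back from sysfs (a paraphrase of the traced logic, not the verbatim script):

    declare -A sorted_t=() sorted_s=()
    nodes_test=(512 512)   # what the test asked for per node
    nodes_sys=(512 512)    # what /sys/devices/system/node/node*/meminfo reports
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Both arrays collapse to the single key "512", so the distributions match.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo PASS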
00:07:17.797 08:19:04 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:07:17.797 08:19:04 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:07:17.797 08:19:04 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:17.797 08:19:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:07:17.797 ************************************
00:07:17.797 START TEST odd_alloc
00:07:17.797 ************************************
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:07:17.797 08:19:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:20.338 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:20.338 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:20.338 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:20.604 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
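Note: odd_alloc requests 2098176 kB, i.e. 1025 pages of 2048 kB each, which cannot split evenly across two NUMA nodes; the per-node helper traced above (setup/hugepages.sh@81-84) lands on 513 pages for node0 and 512 for node1. A sketch of that remainder-spreading arithmetic, as a paraphrase of the traced assignments rather than the verbatim helper:

    _nr_hugepages=1025
    _no_nodes=2
    nodes_test=()
    base=$(( _nr_hugepages / _no_nodes ))   # 512 pages per node
    rem=$(( _nr_hugepages % _no_nodes ))    # 1 page left over
    for (( node = 0; node < _no_nodes; node++ )); do
        # The first "rem" nodes absorb one extra page each.
        nodes_test[node]=$(( base + (node < rem ? 1 : 0) ))
    done
    echo "${nodes_test[@]}"   # -> 513 512, summing to 1025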
setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace compacted: the read loop walks every remaining /proc/meminfo key from Active(anon) to HardwareCorrupted and hits 'continue' on each, since none matches \A\n\o\n\H\u\g\e\P\a\g\e\s]
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:20.605 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
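The loop traced above is the get_meminfo helper from setup/common.sh scanning a meminfo snapshot one "key: value" line at a time until it reaches the requested field (here AnonHugePages). A minimal bash sketch of that pattern, reconstructed from the xtrace records; the function name and exact control flow are illustrative assumptions, not a verbatim copy of SPDK's helper:

  # Sketch reconstructed from the xtrace above; not SPDK's exact source.
  get_meminfo_sketch() { # usage: get_meminfo_sketch AnonHugePages [numa-node]
      local get=$1 node=${2:-}
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # With a node argument, prefer that node's meminfo file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      shopt -s extglob # needed for the +([0-9]) pattern below
      # Per-node files prefix every line with "Node N "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue # non-matching key, keep scanning
          echo "$val" && return 0          # value only, e.g. 0 for AnonHugePages
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Against the snapshots captured in this run, get_meminfo_sketch AnonHugePages prints 0 and get_meminfo_sketch HugePages_Total prints 1025, matching the anon=0 assignment above and the nr_hugepages=1025 echoed further down.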
00:07:20.606 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175473984 kB' 'MemAvailable: 179382408 kB' 'Buffers: 14492 kB' 'Cached: 10107732 kB' 'SwapCached: 0 kB' 'Active: 6564120 kB' 'Inactive: 4387996 kB' 'Active(anon): 6011232 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 833160 kB' 'Mapped: 144144 kB' 'Shmem: 5181340 kB' 'KReclaimable: 223980 kB' 'Slab: 641836 kB' 'SReclaimable: 223980 kB' 'SUnreclaim: 417856 kB' 'KernelStack: 20000 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8904296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311600 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
00:07:20.606 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace compacted: the read loop walks every key from MemTotal to HugePages_Free and hits 'continue' on each, since none matches \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175473736 kB' 'MemAvailable: 179382160 kB' 'Buffers: 14492 kB' 'Cached: 10107732 kB' 'SwapCached: 0 kB' 'Active: 6564132 kB' 'Inactive: 4387996 kB' 'Active(anon): 6011244 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 833172 kB' 'Mapped: 144144 kB' 'Shmem: 5181340 kB' 'KReclaimable: 223980 kB' 'Slab: 641928 kB' 'SReclaimable: 223980 kB' 'SUnreclaim: 417948 kB' 'KernelStack: 20016 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8904320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311616 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
00:07:20.607 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace compacted: the read loop again walks every key from MemTotal to HugePages_Free, 'continue' on each, since none matches \H\u\g\e\P\a\g\e\s\_\R\s\v\d]
00:07:20.609 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:20.609 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:07:20.609 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:07:20.609 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:20.873 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:07:20.874 nr_hugepages=1025
00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:20.874 resv_hugepages=0
00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:20.874 surplus_hugepages=0
00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:20.874 anon_hugepages=0
00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
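The two arithmetic checks just traced are the point of the odd_alloc case: the test requested an odd number of hugepages (1025) and verifies the kernel accounted for every one of them. A hedged standalone restatement of that check; the awk extraction is this sketch's choice, not necessarily how hugepages.sh reads the fields:

  # Restatement of the traced consistency check; not SPDK's hugepages.sh verbatim.
  req=1025  # odd page count requested by the odd_alloc test
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  # Consistent only if every requested page shows up in the kernel's totals.
  (( req == total + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }
  (( req == total )) || { echo "HugePages_Total != requested count" >&2; exit 1; }

With the values in the snapshots above (total 1025, surplus 0, reserved 0) both checks pass, and the byte accounting agrees: 1025 pages at a Hugepagesize of 2048 kB is 1025 * 2048 = 2099200 kB, exactly the Hugetlb figure reported.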
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175475032 kB' 'MemAvailable: 179383456 kB' 'Buffers: 14492 kB' 'Cached: 10107768 kB' 'SwapCached: 0 kB' 'Active: 6564556 kB' 'Inactive: 4387996 kB' 'Active(anon): 6011668 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 833640 kB' 'Mapped: 144144 kB' 'Shmem: 5181376 kB' 'KReclaimable: 223980 kB' 'Slab: 641928 kB' 'SReclaimable: 223980 kB' 'SUnreclaim: 417948 kB' 'KernelStack: 20000 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8904340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311616 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB' 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:20.874 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.874 
08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.874 [xtrace condensed: the common.sh@31/@32 IFS=': ' / read / continue iteration repeats for every remaining /proc/meminfo key (Cached, SwapCached, Active ... CmaTotal, CmaFree, Unaccepted) until the requested key is reached] 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:20.875 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.876 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.876 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86339120 kB' 'MemUsed: 11323564 kB' 'SwapCached: 0 kB' 'Active: 5004180 kB' 'Inactive: 4015732 kB' 'Active(anon): 4539116 kB' 'Inactive(anon): 0 kB' 'Active(file): 465064 kB' 'Inactive(file): 4015732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8466192 kB' 'Mapped: 117188 kB' 'AnonPages: 556864 kB' 'Shmem: 3985396 kB' 'KernelStack: 12024 kB' 'PageTables: 5264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103960 kB' 'Slab: 303744 kB' 'SReclaimable: 103960 kB' 'SUnreclaim: 199784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
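[reference sketch] The trace above and below is setup/common.sh's get_meminfo stepping through one meminfo key at a time. A minimal stand-alone paraphrase of that helper, assuming only what the traced statements at common.sh@17-33 show (the mem_f fallback, the "Node N " prefix strip, the IFS=': ' read loop); the name get_meminfo_sketch and the explicit for-loop framing are illustrative, not the literal common.sh source:

get_meminfo_sketch() {
  local get=$1 node=$2               # e.g. "HugePages_Surp" "0"; node may be empty
  local line var val mem_f=/proc/meminfo
  # Per-node counters live in sysfs; fall back to the global file otherwise
  # (mirrors common.sh@22-24 in the trace).
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  local -a mem
  mapfile -t mem < "$mem_f"
  shopt -s extglob                   # for the +([0-9]) pattern just below
  mem=("${mem[@]#Node +([0-9]) }")   # node files prefix every line with "Node N "
  local IFS=': '
  for line in "${mem[@]}"; do
    read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue # the per-key skip the xtrace repeats
    echo "$val"                      # e.g. 1025 for HugePages_Total
    return 0
  done
  return 1
}

Run against the node0 dump just printed, get_meminfo_sketch HugePages_Surp 0 would skip every field until HugePages_Surp and echo 0, which is exactly where the condensed loop below ends up.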
00:07:20.876 [xtrace condensed: the same common.sh@31/@32 key-matching loop walks every node0 meminfo field (MemTotal, MemFree, MemUsed ... HugePages_Total, HugePages_Free) looking for HugePages_Surp]
00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 89136164 kB' 'MemUsed: 4582316 kB' 'SwapCached: 0 kB' 'Active: 1560524 kB' 'Inactive: 372264 kB' 'Active(anon): 1472700 kB' 'Inactive(anon): 0 kB' 'Active(file): 87824 kB' 'Inactive(file): 372264 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1656088 kB' 'Mapped: 26956 kB' 'AnonPages: 276864 kB' 'Shmem: 1196000 kB' 'KernelStack: 7976 kB' 'PageTables: 3024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120020 kB' 'Slab: 338184 kB' 'SReclaimable: 120020 kB' 'SUnreclaim: 218164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:20.877 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
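[reference sketch] Restating what hugepages.sh@110-@117 does in this trace, using the variable names visible in the traced lines (nr_hugepages, surp, resv, nodes_test, all set by the surrounding script) and the hypothetical get_meminfo_sketch from above; the real script's bookkeeping is more involved:

# Global check (@110): the kernel total must equal requested + surplus + reserved.
total=$(get_meminfo_sketch HugePages_Total)      # 1025 in this run
(( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total"

# Per-node accounting (@115-@117): each node's expected count is its test
# allocation plus the reserved pages plus that node's surplus (0 for both here).
for node in "${!nodes_test[@]}"; do
  (( nodes_test[node] += resv ))
  surp_n=$(get_meminfo_sketch HugePages_Surp "$node")
  (( nodes_test[node] += surp_n ))
done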
00:07:20.877 [xtrace condensed: the key-matching loop walks the remaining node1 meminfo fields (MemUsed, SwapCached ... HugePages_Total, HugePages_Free) looking for HugePages_Surp] 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc --
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:07:20.878 node0=512 expecting 513 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:07:20.878 node1=513 expecting 512 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:07:20.878 00:07:20.878 real 0m3.030s 00:07:20.878 user 0m1.214s 00:07:20.878 sys 0m1.881s 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.878 08:19:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:20.878 ************************************ 00:07:20.878 END TEST odd_alloc 00:07:20.878 ************************************ 00:07:20.878 08:19:07 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:07:20.878 08:19:07 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:20.878 08:19:07 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.878 08:19:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:20.878 ************************************ 00:07:20.878 START TEST custom_alloc 00:07:20.878 ************************************ 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:20.878 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:20.879 
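[reference sketch] custom_alloc, traced below, sets up 1 GiB worth of 2 MiB pages on node 0 (nodes_hp[0]=512) and 2 GiB worth on node 1 (nodes_hp[1]=1024); with the 2048 kB Hugepagesize reported in the dumps above, the page counts and the HUGENODE string it ends up with fall out as follows (the division is inferred from the size/nr_hugepages pairs in the trace; get_test_nr_hugepages' internals are not shown here):

hugepage_kb=2048                             # Hugepagesize from the meminfo dumps
declare -a nodes_hp
nodes_hp[0]=$(( 1048576 / hugepage_kb ))     # 1 GiB in kB -> 512 pages (@175)
nodes_hp[1]=$(( 2097152 / hugepage_kb ))     # 2 GiB in kB -> 1024 pages (@178)
hugenode=()
for node in "${!nodes_hp[@]}"; do
  hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")  # same string built at @182
done
joined=$(IFS=,; printf '%s' "${hugenode[*]}")
echo "$joined"   # nodes_hp[0]=512,nodes_hp[1]=1024, matching HUGENODE at @187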
08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:24.189 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:24.189 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:24.189 0000:80:04.2 (8086 2021): 
00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:07:20.879 08:19:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:24.189 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:24.189 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:24.189 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
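Every get_meminfo call below follows the same traced pattern: snapshot the meminfo file into an array, strip any per-node "Node N" prefix, then split each line on ': ' until the requested key matches. A runnable sketch of that parser, reconstructed from the common.sh@17-@33 statements in this trace (the shipped helper in the SPDK test tree may differ in detail):

    #!/usr/bin/env bash
    # get_meminfo, as reconstructed from the common.sh@17-@33 trace lines.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f mem
        mem_f=/proc/meminfo                                    # @22: default source
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # @23: per-node view
        fi
        mapfile -t mem < "$mem_f"                              # @28
        mem=("${mem[@]#Node +([0-9]) }")                       # @29: drop "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"             # @31: key/value split
            if [[ $var == "$get" ]]; then                      # @32
                echo "$val"                                    # @33: print the value column
                return 0
            fi
        done
        return 1
    }

On this box, get_meminfo HugePages_Total would print 1536, per the snapshot that follows.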
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:24.189 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:24.190 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174459004 kB' 'MemAvailable: 178367440 kB' 'Buffers: 14492 kB' 'Cached: 10107872 kB' 'SwapCached: 0 kB' 'Active: 6568712 kB' 'Inactive: 4387996 kB' 'Active(anon): 6015824 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 837852 kB' 'Mapped: 144208 kB' 'Shmem: 5181480 kB' 'KReclaimable: 224004 kB' 'Slab: 642024 kB' 'SReclaimable: 224004 kB' 'SUnreclaim: 418020 kB' 'KernelStack: 20032 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8904320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311632 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
00:07:24.190 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:24.190 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[trace condensed: identical "@31 read -r var val _" / "@32 [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "@32 continue" lines repeated for each key in the snapshot above until AnonHugePages matches]
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
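With anon=0 in hand (the @96 guard above inspected the bracketed mode of transparent_hugepage, apparently the reason AnonHugePages is sampled at all), verify_nr_hugepages goes on to collect HugePages_Surp and HugePages_Rsvd the same way. One plausible shape for the check being assembled, using only values visible in the snapshots; the actual assertions in hugepages.sh@89-@187 are not shown in this excerpt and may differ:

    #!/usr/bin/env bash
    # Illustrative consistency check over the values this trace collects.
    # get_meminfo is the parser sketched earlier; the expected total (1536)
    # is nodes_hp[0]=512 + nodes_hp[1]=1024 from the setup above.
    expected=1536
    anon=$(get_meminfo AnonHugePages)    # 0 here: no THP inflating the numbers
    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the static pool
    resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total)

    ((anon == 0 && surp == 0 && resv == 0)) || echo "unexpected hugepage state"
    ((total == expected)) || echo "pool is $total pages, expected $expected"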
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174459368 kB' 'MemAvailable: 178367784 kB' 'Buffers: 14492 kB' 'Cached: 10107876 kB' 'SwapCached: 0 kB' 'Active: 6568560 kB' 'Inactive: 4387996 kB' 'Active(anon): 6015672 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 837584 kB' 'Mapped: 144172 kB' 'Shmem: 5181484 kB' 'KReclaimable: 223964 kB' 'Slab: 642084 kB' 'SReclaimable: 223964 kB' 'SUnreclaim: 418120 kB' 'KernelStack: 20016 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8907000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311600 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:24.191 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[trace condensed: identical "@31 read -r var val _" / "@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "@32 continue" lines repeated for each key in the snapshot above until HugePages_Surp matches]
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
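The snapshots themselves make the totals easy to cross-check by hand: Hugetlb is HugePages_Total times Hugepagesize, and the per-node targets must sum to the pool.

    # Hand arithmetic against the snapshot values above:
    echo $((1536 * 2048))   # 3145728 -> matches 'Hugetlb: 3145728 kB'
    echo $((512 + 1024))    # 1536    -> matches 'HugePages_Total: 1536'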
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174459168 kB' 'MemAvailable: 178367580 kB' 'Buffers: 14492 kB' 'Cached: 10107892 kB' 'SwapCached: 0 kB' 'Active: 6568264 kB' 'Inactive: 4387996 kB' 'Active(anon): 6015376 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 837252 kB' 'Mapped: 144172 kB' 'Shmem: 5181500 kB' 'KReclaimable: 223956 kB' 'Slab: 642076 kB' 'SReclaimable: 223956 kB' 'SUnreclaim: 418120 kB' 'KernelStack: 19984 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8905676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311632 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:24.193 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[trace condensed: identical "@31 read -r var val _" / "@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "@32 continue" lines repeated for each key in the snapshot above through AnonHugePages]
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
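Note: the trace above is the generic field scan from setup/common.sh@31-33. A minimal standalone sketch of the same pattern, for reference (the function name get_meminfo_value is illustrative, not SPDK's actual helper; it assumes the stock "Field: value kB" layout of /proc/meminfo):

    # Sketch: scan /proc/meminfo line by line until the requested field is
    # found, then print its value and stop -- the same IFS=': ' / read -r /
    # continue loop that produces the xtrace output above.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching fields
            echo "$val"                        # kB, or a bare page count
            return 0
        done </proc/meminfo
        return 1
    }
    # e.g. get_meminfo_value HugePages_Rsvd   -> prints 0 in this run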
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174457244 kB' 'MemAvailable: 178365656 kB' 'Buffers: 14492 kB' 'Cached: 10107912 kB' 'SwapCached: 0 kB' 'Active: 6568280 kB' 'Inactive: 4387996 kB' 'Active(anon): 6015392 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 837264 kB' 'Mapped: 144172 kB' 'Shmem: 5181520 kB' 'KReclaimable: 223956 kB' 'Slab: 642076 kB' 'SReclaimable: 223956 kB' 'SUnreclaim: 418120 kB' 'KernelStack: 20096 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8918068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311744 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
00:07:24.195 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [scan loop condensed: MemTotal through Unaccepted -- each != HugePages_Total, continue]
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
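The arithmetic guard traced at setup/hugepages.sh@107-110 is the core of the custom_alloc check: the HugePages_Total read back from /proc/meminfo must equal the requested nr_hugepages plus surplus and reserved pages, and the per-node totals discovered by get_nodes (512 on node0, 1024 on node1) must sum to the same 1536. A sketch of that check, as a standalone rewrite rather than the repo's code (awk used here for brevity; the script itself uses its own bash read loop):

    # Sketch: verify global and per-node hugepage accounting.
    nr_hugepages=1536 surp=0 resv=0
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1536 in this run
    (( total == nr_hugepages + surp + resv )) || echo "global count mismatch" >&2
    node_sum=$(( 512 + 1024 ))           # nodes_sys[0] + nodes_sys[1]
    (( node_sum == nr_hugepages )) || echo "per-node split mismatch" >&2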
node in "${!nodes_test[@]}" 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86342828 kB' 'MemUsed: 11319856 kB' 'SwapCached: 0 kB' 'Active: 5003644 kB' 'Inactive: 4015732 kB' 'Active(anon): 4538580 kB' 'Inactive(anon): 0 kB' 'Active(file): 465064 kB' 'Inactive(file): 4015732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8466260 kB' 'Mapped: 117216 kB' 'AnonPages: 556276 kB' 'Shmem: 3985464 kB' 'KernelStack: 12008 kB' 'PageTables: 5216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103928 kB' 'Slab: 303908 kB' 'SReclaimable: 103928 kB' 'SUnreclaim: 199980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.197 08:19:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.197 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- 
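Per-node reads switch mem_f to /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node <N> " prefix that common.sh@29 strips with an extglob parameter expansion before running the same field scan. A small sketch of that node-aware variant (illustrative only, assuming the stock sysfs layout):

    # Sketch: read node0's meminfo, strip the "Node 0 " prefix, then scan
    # for HugePages_Surp exactly as the flat /proc/meminfo case does.
    shopt -s extglob
    node=0
    mapfile -t mem <"/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == HugePages_Surp ]] && { echo "$val"; break; }   # 0 on this box
    done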
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:24.198 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:24.199 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 88114208 kB' 'MemUsed: 5604272 kB' 'SwapCached: 0 kB' 'Active: 1564348 kB' 'Inactive: 372264 kB' 'Active(anon): 1476524 kB' 'Inactive(anon): 0 kB' 'Active(file): 87824 kB' 'Inactive(file): 372264 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1656144 kB' 'Mapped: 26956 kB' 'AnonPages: 280720 kB' 'Shmem: 1196056 kB' 'KernelStack: 8072 kB' 'PageTables: 2788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120028 kB' 'Slab: 338136 kB' 'SReclaimable: 120028 kB' 'SUnreclaim: 218108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:07:24.199 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [scan loop condensed: MemTotal through HugePages_Free -- each != HugePages_Surp, continue]
00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512
expecting 512 00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:07:24.200 node1=1024 expecting 1024 00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:07:24.200 00:07:24.200 real 0m3.051s 00:07:24.200 user 0m1.218s 00:07:24.200 sys 0m1.886s 00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.200 08:19:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:24.200 ************************************ 00:07:24.200 END TEST custom_alloc 00:07:24.200 ************************************ 00:07:24.200 08:19:10 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:24.200 08:19:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:24.200 08:19:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.200 08:19:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:24.200 ************************************ 00:07:24.200 START TEST no_shrink_alloc 00:07:24.200 ************************************ 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:24.200 08:19:10 
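The custom_alloc verification that just completed prints each node's actual hugepage count next to its expectation and then compares the comma-joined lists (the [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] step above). A minimal sketch of that check, assuming 2048 kB pages and the standard per-node sysfs layout; variable names here are illustrative rather than the script's own (the real code accumulates results via the sorted_t/sorted_s arrays seen in the trace):

    #!/usr/bin/env bash
    # Sketch: re-create the 'node0=512 expecting 512' style check above.
    expected=(512 1024)        # nodes_test[0]=512, nodes_test[1]=1024
    actual=()
    for node in "${!expected[@]}"; do
        # per-node 2 MB hugepage count, standard sysfs location
        count=$(< "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${node}=${count} expecting ${expected[node]}"
        actual+=("$count")
    done
    # join both lists with commas and compare, as in the @130 trace line
    (IFS=,; [[ "${actual[*]}" == "${expected[*]}" ]]) && echo "per-node layout verified"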
00:07:24.200 08:19:10 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:07:24.200 08:19:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:07:24.200 08:19:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:24.200 08:19:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:07:24.200 ************************************
00:07:24.200 START TEST no_shrink_alloc
00:07:24.200 ************************************
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
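The get_test_nr_hugepages call that just returned is mostly arithmetic: a 2097152 kB (2 GiB) request at the default 2048 kB page size yields nr_hugepages=1024, and the trailing '0' argument assigns the whole pool to NUMA node 0. A simplified model under those assumptions (the real helper in setup/hugepages.sh also spreads pages across all nodes when no node list is given; names below are illustrative):

    # Simplified model of: get_test_nr_hugepages 2097152 0
    size_kb=2097152                    # requested pool size in kB (2 GiB)
    default_hugepages_kb=2048          # Hugepagesize from /proc/meminfo
    (( size_kb >= default_hugepages_kb )) || exit 1
    nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 2097152 / 2048 = 1024
    user_nodes=(0)                     # trailing '0' pins the pages to node 0
    declare -a nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages # node 0 gets the full 1024 pages
    done
    echo "nodes_test[0]=${nodes_test[0]}"   # -> 1024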
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:07:24.200 08:19:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:26.740 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:26.740 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:26.740 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:27.005 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175481128 kB' 'MemAvailable: 179389496 kB' 'Buffers: 14492 kB' 'Cached: 10108024 kB' 'SwapCached: 0 kB' 'Active: 6572844 kB' 'Inactive: 4387996 kB' 'Active(anon): 6019956 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 842120 kB' 'Mapped: 144716 kB' 'Shmem: 5181632 kB' 'KReclaimable: 223868 kB' 'Slab: 642924 kB' 'SReclaimable: 223868 kB' 'SUnreclaim: 419056 kB' 'KernelStack: 20224 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8909784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311808 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
[identical common.sh@31-32 xtrace cycle (IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue) repeated for every /proc/meminfo field before AnonHugePages, MemTotal through HardwareCorrupted]
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
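Each get_meminfo call in this trace follows one pattern: pick /proc/meminfo, or the per-node copy under /sys when a node argument is supplied (here node= is empty, so the -e test against .../node/node/meminfo fails and the system-wide file is used), strip any 'Node N ' prefix, then scan with IFS=': ' until the requested field matches and echo its value. A self-contained sketch written to those assumptions, not the literal setup/common.sh helper:

    #!/usr/bin/env bash
    shopt -s extglob                   # enables the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        # per-node files prefix each line with 'Node N ', e.g. 'Node 0 MemTotal: ...'
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the 'Node N ' prefix if present
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo AnonHugePages          # prints 0 on this box, matching anon=0 above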
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:27.007 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175481088 kB' 'MemAvailable: 179389456 kB' 'Buffers: 14492 kB' 'Cached: 10108028 kB' 'SwapCached: 0 kB' 'Active: 6577456 kB' 'Inactive: 4387996 kB' 'Active(anon): 6024568 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 846320 kB' 'Mapped: 144680 kB' 'Shmem: 5181636 kB' 'KReclaimable: 223868 kB' 'Slab: 642968 kB' 'SReclaimable: 223868 kB' 'SUnreclaim: 419100 kB' 'KernelStack: 20144 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8914140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311732 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
[identical common.sh@31-32 xtrace cycle (IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue) repeated for every /proc/meminfo field before HugePages_Surp, MemTotal through HugePages_Rsvd]
00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
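The meminfo snapshots above are internally consistent on the hugepage side: HugePages_Total: 1024 at Hugepagesize: 2048 kB accounts for Hugetlb: 2097152 kB, exactly the 2 GiB requested earlier, with HugePages_Free: 1024 and both HugePages_Surp and HugePages_Rsvd at 0 (the values this verify pass extracts). A quick consistency check one could run on a live box; note Hugetlb aggregates all hugepage sizes, so the equality assumes a single page size is in use:

    # cross-check hugepage accounting in /proc/meminfo
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # pages
    pagesz=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)     # kB per page
    hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)         # kB total
    echo "$total pages x $pagesz kB = $(( total * pagesz )) kB (Hugetlb: $hugetlb kB)"
    (( total * pagesz == hugetlb )) && echo "accounting consistent"
    # with the snapshot values: 1024 x 2048 kB = 2097152 kB = 2 GiB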
00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175480316 kB' 'MemAvailable: 179388684 kB' 'Buffers: 14492 kB' 'Cached: 10108048 kB' 'SwapCached: 0 kB' 'Active: 6572296 kB' 'Inactive: 4387996 kB' 'Active(anon): 6019408 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 841144 kB' 'Mapped: 144604 kB' 'Shmem: 5181656 kB' 'KReclaimable: 223868 kB' 'Slab: 642968 kB' 'SReclaimable: 223868 kB' 'SUnreclaim: 419100 kB' 'KernelStack: 20240 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8908044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311760 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB' 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:27.009 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the IFS=': ' read loop compares each remaining /proc/meminfo key (MemFree through HugePages_Free, as listed in the printf snapshot above) against HugePages_Rsvd and hits `continue` on every non-match]
00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
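[note] The trace above is setup/common.sh's get_meminfo helper resolving a single key from /proc/meminfo (or a per-node sysfs copy). A minimal sketch of what the xtrace implies, not the verbatim SPDK source; the structure (mem_f fallback, mapfile, Node-prefix strip, IFS=': ' read loop) follows the trace, the rest is a reconstruction:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo mem
        # With a node argument, read the per-node sysfs copy instead;
        # with node unset this path doesn't exist, as the trace shows.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node lines carry a "Node <id> " prefix; strip it so the
        # key:value split below lines up with the global format.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Rsvd   # -> 0, matching the echo 0 in the trace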
00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:27.011 nr_hugepages=1024 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:27.011 resv_hugepages=0 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:27.011 surplus_hugepages=0 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:27.011 anon_hugepages=0 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175482016 kB' 'MemAvailable: 179390384 kB' 'Buffers: 14492 kB' 'Cached: 10108068 kB' 'SwapCached: 0 kB' 'Active: 6571272 kB' 'Inactive: 4387996 kB' 'Active(anon): 6018384 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 840112 kB' 'Mapped: 144184 kB' 'Shmem: 5181676 kB' 'KReclaimable: 223868 kB' 'Slab: 642968 kB' 'SReclaimable: 223868 kB' 'SUnreclaim: 419100 kB' 'KernelStack: 19968 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8905400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311616 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB' 00:07:27.011 
08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:27.011 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the same per-key scan repeats against HugePages_Total, continuing past every other key in the snapshot above]
00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 --
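[note] hugepages.sh@110 above asserts the accounting invariant this pass is checking: the pool size read back from meminfo must equal the requested count plus the surplus and reserved counts just collected, and get_nodes then sizes the per-node arrays. A hedged sketch of both steps; nodes_sys and no_nodes follow the trace's names, get_meminfo is the helper sketched earlier, and reading each node's 2048kB pool from sysfs is an assumption about where the 1024/0 values come from:

    shopt -s extglob
    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo HugePages_Total)   # helper sketched above
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    declare -a nodes_sys
    no_nodes=0
    for node in /sys/devices/system/node/node+([0-9]); do
        # assumed source of the per-node counts seen in the trace
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        (( no_nodes += 1 ))
    done
    echo "no_nodes=$no_nodes nodes_sys=(${nodes_sys[*]})"   # 2 nodes here: 1024 and 0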
# for node in /sys/devices/system/node/node+([0-9]) 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85281636 kB' 'MemUsed: 12381048 kB' 'SwapCached: 0 kB' 'Active: 5003988 kB' 'Inactive: 4015732 kB' 'Active(anon): 4538924 kB' 'Inactive(anon): 0 kB' 'Active(file): 465064 kB' 'Inactive(file): 4015732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8466340 kB' 'Mapped: 117228 kB' 'AnonPages: 556536 kB' 'Shmem: 3985544 kB' 'KernelStack: 12024 kB' 'PageTables: 5264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103880 kB' 'Slab: 304224 kB' 'SReclaimable: 103880 kB' 'SUnreclaim: 200344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
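[note] The node0 snapshot printed above has already had its "Node 0 " prefixes removed by the mem=("${mem[@]#Node +([0-9]) }") expansion at common.sh@29. A one-liner illustrating that strip; the example line is hypothetical but matches the per-node sysfs meminfo format:

    shopt -s extglob
    line='Node 0 HugePages_Surp: 0'
    echo "${line#Node +([0-9]) }"   # -> HugePages_Surp: 0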
00:07:27.013 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the scan now walks node0's meminfo keys (MemUsed through HugePages_Free in the snapshot above) against HugePages_Surp, continuing on each non-match]
00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.014 08:19:13
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:27.014 node0=1024 expecting 1024 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:27.014 08:19:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:29.555 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:29.555 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:29.555 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:29.555 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:29.555 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:29.555 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:07:29.818 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:29.818 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:29.818 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:29.818 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:29.818 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:29.818 08:19:16 
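The scan that just completed is easier to read as code than as trace output. Below is a minimal sketch of the get_meminfo helper, reconstructed purely from the common.sh line numbers and commands visible in this trace; the real setup/common.sh may differ in details (the per-node branch and the exact shape of the read loop are assumptions):

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from the trace above; not the
    # verbatim SPDK source. Prints the value of one /proc/meminfo field.
    shopt -s extglob  # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node queries read the node's own meminfo file instead
        # (assumption: this mirrors the [[ -e ... ]] / [[ -n '' ]] checks
        # seen at common.sh@23 and @25 in the trace).
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Strip the "Node N " prefix that per-node meminfo lines carry.
        mem=("${mem[@]#Node +([0-9]) }")
        # The long runs of [[ key == \H\u\g\e... ]] / continue in the trace
        # are this loop skipping every field until the requested one matches.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"  # e.g. "0" for HugePages_Surp
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp  # prints 0 on the machine in this log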
00:07:29.818 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:07:29.818 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:07:29.818 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:07:29.818 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:07:29.818 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:29.818 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:29.818 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:29.819 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175454992 kB' 'MemAvailable: 179363376 kB' 'Buffers: 14492 kB' 'Cached: 10108168 kB' 'SwapCached: 0 kB' 'Active: 6575324 kB' 'Inactive: 4387996 kB' 'Active(anon): 6022436 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 843632 kB' 'Mapped: 144300 kB' 'Shmem: 5181776 kB' 'KReclaimable: 223900 kB' 'Slab: 642960 kB' 'SReclaimable: 223900 kB' 'SUnreclaim: 419060 kB' 'KernelStack: 20000 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8906160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311744 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
[00:07:29.819-00:07:29.821 setup/common.sh@31-@32: IFS=': ' / read -r var val _ / pattern-test / continue, once per field from MemTotal through HardwareCorrupted, all failing the AnonHugePages match; identical trace lines elided]
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
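These field lookups feed the node accounting that printed 'node0=1024 expecting 1024' above. Inferred from the hugepages.sh@97-@130 trace entries, the bookkeeping looks roughly like the sketch below (it reuses the get_meminfo sketch shown earlier); nodes_test and nodes_sys holding the requested and kernel-reported page counts per node is an assumption, as is the exact placement of the surplus adjustment:

    # Sketch of the verify_nr_hugepages accounting inferred from the trace;
    # variable names follow the trace, the upstream logic may differ.
    verify_nr_hugepages() {
        local node anon surp resv
        local -a nodes_test=([0]=1024)  # pages requested per node (example)
        local -a nodes_sys=([0]=1024)   # pages the kernel reports per node
        local -A sorted_t sorted_s

        anon=$(get_meminfo AnonHugePages)   # 0 here: no THP in the count
        surp=$(get_meminfo HugePages_Surp)  # 0: no surplus pages to discount
        resv=$(get_meminfo HugePages_Rsvd)  # 0: none reserved

        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += surp ))  # the "(( ... += 0 ))" above
            sorted_t[${nodes_test[node]}]=1
            sorted_s[${nodes_sys[node]}]=1
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
            [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
        done
    }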
# mem=("${mem[@]#Node +([0-9]) }") 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175455792 kB' 'MemAvailable: 179364176 kB' 'Buffers: 14492 kB' 'Cached: 10108172 kB' 'SwapCached: 0 kB' 'Active: 6574644 kB' 'Inactive: 4387996 kB' 'Active(anon): 6021756 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 843376 kB' 'Mapped: 144192 kB' 'Shmem: 5181780 kB' 'KReclaimable: 223900 kB' 'Slab: 642928 kB' 'SReclaimable: 223900 kB' 'SUnreclaim: 419028 kB' 'KernelStack: 20000 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8906180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311712 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB' 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.821 08:19:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.821 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.822 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.823 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.824 08:19:16 
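The hugepage numbers in these snapshots stay internally consistent: 1024 pages of 2048 kB each is exactly the 2097152 kB reported as Hugetlb. A throwaway check (not part of the test suite) that verifies this arithmetic on any machine:

    # 1024 pages x 2048 kB/page = 2097152 kB, matching the Hugetlb field.
    awk '/^HugePages_Total/ {t = $2} /^Hugepagesize/ {s = $2}
         END {print t * s, "kB"}' /proc/meminfo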
00:07:29.824 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175455036 kB' 'MemAvailable: 179363420 kB' 'Buffers: 14492 kB' 'Cached: 10108188 kB' 'SwapCached: 0 kB' 'Active: 6574656 kB' 'Inactive: 4387996 kB' 'Active(anon): 6021768 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 843424 kB' 'Mapped: 144192 kB' 'Shmem: 5181796 kB' 'KReclaimable: 223900 kB' 'Slab: 642928 kB' 'SReclaimable: 223900 kB' 'SUnreclaim: 419028 kB' 'KernelStack: 20000 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8906200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311712 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
[00:07:29.824-00:07:29.825 setup/common.sh@31-@32: IFS=': ' / read -r var val _ / pattern-test / continue, once per field from MemTotal onward, each failing the HugePages_Rsvd match so far; identical trace lines elided, scan continues below]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:29.826 nr_hugepages=1024 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:29.826 resv_hugepages=0 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:29.826 surplus_hugepages=0 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:29.826 anon_hugepages=0 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:30.089 08:19:16 
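The field-by-field scan condensed above is common.sh's get_meminfo at work. A minimal sketch of the technique (simplified: it reads the file directly instead of mapfile-ing into an array the way the traced common.sh does, and the per-node branch is left out):

  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # skip every field until the requested key matches
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo HugePages_Rsvd    # prints 0 on this box, per the trace

Comparing with == inside [[ ]] is also why the trace escapes every character of HugePages_Rsvd: the right-hand side is quoted, so xtrace renders it backslash-escaped and the comparison is literal rather than a glob match.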
00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:29.826 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:30.089 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175455916 kB' 'MemAvailable: 179364300 kB' 'Buffers: 14492 kB' 'Cached: 10108232 kB' 'SwapCached: 0 kB' 'Active: 6574880 kB' 'Inactive: 4387996 kB' 'Active(anon): 6021992 kB' 'Inactive(anon): 0 kB' 'Active(file): 552888 kB' 'Inactive(file): 4387996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 843532 kB' 'Mapped: 144192 kB' 'Shmem: 5181840 kB' 'KReclaimable: 223900 kB' 'Slab: 642928 kB' 'SReclaimable: 223900 kB' 'SUnreclaim: 419028 kB' 'KernelStack: 19984 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8906224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311712 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533460 kB' 'DirectMap2M: 10680320 kB' 'DirectMap1G: 190840832 kB'
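The snapshot above is the whole of /proc/meminfo replayed through printf. For a quick manual check, the same HugePages_* keys can be pulled with a one-line awk filter instead of the field-by-field bash scan (equivalent output, shown here only for comparison with the technique the test uses):

  awk -F': *' '/^HugePages_(Total|Free|Rsvd|Surp)/ { print $1 "=" $2 }' /proc/meminfo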
00:07:30.090 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [get_meminfo scan condensed: each field from MemTotal through Unaccepted was compared against HugePages_Total and skipped with continue until HugePages_Total matched]
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85262824 kB' 'MemUsed: 12399860 kB' 'SwapCached: 0 kB' 'Active: 5003608 kB' 'Inactive: 4015732 kB' 'Active(anon): 4538544 kB' 'Inactive(anon): 0 kB' 'Active(file): 465064 kB' 'Inactive(file): 4015732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8466340 kB' 'Mapped: 117236 kB' 'AnonPages: 556160 kB' 'Shmem: 3985544 kB' 'KernelStack: 11992 kB' 'PageTables: 5168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103912 kB' 'Slab: 304004 kB' 'SReclaimable: 103912 kB' 'SUnreclaim: 200092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
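The node0 snapshot above comes from the per-node branch of the same helper: when a node argument is given and /sys/devices/system/node/node0/meminfo exists, mem_f is pointed there and the "Node 0 " prefix is stripped so the lines parse like the global file. A standalone sketch of that branch (extglob is required for the prefix pattern; paths follow the trace):

  shopt -s extglob
  node=0 mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
  printf '%s\n' "${mem[@]}" | grep '^HugePages_Surp'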
00:07:30.092 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [get_meminfo scan condensed: each node0 field from MemTotal through HugePages_Free was compared against HugePages_Surp and skipped with continue until HugePages_Surp matched]
00:07:30.094 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:30.094 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:30.094 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:30.094 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:30.094 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:30.094 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:30.094 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:07:30.094 node0=1024 expecting 1024
00:07:30.094 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:07:30.094
00:07:30.094 real 0m6.000s
00:07:30.094 user 0m2.464s
00:07:30.094 sys 0m3.659s
00:07:30.094 08:19:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:30.094 08:19:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:07:30.094 ************************************
00:07:30.094 END TEST no_shrink_alloc
00:07:30.094 ************************************
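no_shrink_alloc's closing check, in outline: tally the kernel's per-node hugepage counts, fold in reserved and surplus pages, and compare against what the test allocated. A sketch of that tally (reading nr_hugepages from sysfs is an illustrative assumption here; the traced hugepages.sh carries its expected counts in the nodes_sys and nodes_test arrays instead):

  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  for n in "${!nodes_sys[@]}"; do
      echo "node$n=${nodes_sys[$n]}"
  done
  [[ ${nodes_sys[0]} == 1024 ]]    # node0=1024 expecting 1024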
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:30.094 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:30.094 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:30.094 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:30.094 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:30.094 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:30.094 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:30.094 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:30.094 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:30.094 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:30.094 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:30.094 00:07:30.094 real 0m22.749s 00:07:30.094 user 0m8.934s 00:07:30.094 sys 0m13.451s 00:07:30.094 08:19:16 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.094 08:19:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:30.094 ************************************ 00:07:30.094 END TEST hugepages 00:07:30.094 ************************************ 00:07:30.094 08:19:16 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:07:30.095 08:19:16 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.095 08:19:16 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.095 08:19:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:30.095 ************************************ 00:07:30.095 START TEST driver 00:07:30.095 ************************************ 00:07:30.095 08:19:17 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:07:30.095 * Looking for test storage... 
00:07:30.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:30.355 08:19:17 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:07:30.355 08:19:17 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:30.355 08:19:17 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:34.555 08:19:21 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:34.555 08:19:21 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:34.555 08:19:21 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.555 08:19:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:34.555 ************************************ 00:07:34.556 START TEST guess_driver 00:07:34.556 ************************************ 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:07:34.556 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:34.556 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:34.556 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:34.556 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:34.556 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:07:34.556 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:07:34.556 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:07:34.556 08:19:21 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:07:34.556 Looking for driver=vfio-pci 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:07:34.556 08:19:21 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:24 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.094 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.354 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.354 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.354 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.354 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.354 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.354 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.354 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.354 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.354 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:37.925 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:37.925 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:37.925 08:19:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:38.185 08:19:25 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:38.185 08:19:25 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:07:38.185 08:19:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:38.185 08:19:25 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:42.391 00:07:42.391 real 0m7.864s 00:07:42.391 user 0m2.284s 00:07:42.391 sys 0m4.061s 00:07:42.391 08:19:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.391 08:19:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:07:42.391 ************************************ 00:07:42.391 END TEST guess_driver 00:07:42.391 ************************************ 00:07:42.391 00:07:42.391 real 0m12.082s 00:07:42.391 user 0m3.524s 00:07:42.391 sys 0m6.230s 00:07:42.391 08:19:29 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.391 
08:19:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:42.391 ************************************ 00:07:42.391 END TEST driver 00:07:42.391 ************************************ 00:07:42.391 08:19:29 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:07:42.391 08:19:29 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:42.391 08:19:29 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.391 08:19:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:42.391 ************************************ 00:07:42.391 START TEST devices 00:07:42.391 ************************************ 00:07:42.391 08:19:29 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:07:42.391 * Looking for test storage... 00:07:42.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:42.391 08:19:29 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:42.391 08:19:29 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:42.391 08:19:29 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:42.391 08:19:29 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:45.691 08:19:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:45.691 08:19:32 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:07:45.691 No valid GPT data, 
bailing 00:07:45.691 08:19:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:45.691 08:19:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:45.691 08:19:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:45.691 08:19:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:45.691 08:19:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:45.691 08:19:32 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:45.691 08:19:32 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.691 08:19:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:45.691 ************************************ 00:07:45.691 START TEST nvme_mount 00:07:45.691 ************************************ 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:45.691 08:19:32 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:45.691 08:19:32 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:46.633 Creating new GPT entries in memory. 00:07:46.633 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:46.633 other utilities. 00:07:46.633 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:46.633 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:46.633 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:46.633 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:46.633 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:47.573 Creating new GPT entries in memory. 00:07:47.573 The operation has completed successfully. 00:07:47.573 08:19:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:47.573 08:19:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:47.573 08:19:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 119431 00:07:47.573 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:47.573 08:19:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:07:47.573 08:19:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:47.573 08:19:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:47.573 08:19:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
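The verify pass the nvme_mount test is entering here walks a per-device status report and flags the device under test. A rough reconstruction follows, assuming scripts/setup.sh config prints one PCI function per line in the four-field shape that 'read -r pci _ _ status' implies; the setup.sh path, the allowed BDF 0000:5e:00.0, and the expected 'nvme0n1:nvme0n1p1' mount string are all taken from this run.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
found=0
while read -r pci _ _ status; do
  [[ $pci == 0000:5e:00.0 ]] || continue               # only the allowed device
  # e.g. status: "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
  [[ $status == *nvme0n1:nvme0n1p1* ]] && found=1
done < <(PCI_ALLOWED=0000:5e:00.0 "$spdk/scripts/setup.sh" config)
(( found == 1 ))                                        # verify succeeds iff found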
00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:47.834 08:19:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.379 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.380 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.380 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.380 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.380 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.380 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:50.640 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:50.640 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:50.902 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:50.902 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:07:50.902 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:50.902 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:07:50.902 08:19:37 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:50.902 08:19:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:54.199 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.199 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:54.199 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:54.199 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.199 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.199 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.199 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.199 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:54.200 08:19:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:56.744 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:56.744 00:07:56.744 real 0m11.070s 00:07:56.744 user 0m3.307s 00:07:56.744 sys 0m5.589s 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.744 08:19:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:07:56.744 ************************************ 00:07:56.744 END TEST nvme_mount 00:07:56.744 ************************************ 
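Condensed, the nvme_mount test that just ended boils down to the commands below, all of which appear in the trace; only the dummy-file write is inferred (the '@56 -- # :' line suggests ': > test_nvme'). This is destructive to the disk, as the wipefs output above shows.

disk=/dev/nvme0n1
mnt=./nvme_mount && mkdir -p "$mnt"
sgdisk "$disk" --zap-all                   # destroy any existing GPT
sgdisk "$disk" --new=1:2048:2099199        # one ~1 GiB partition
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
: > "$mnt/test_nvme"                       # dummy file the verify pass checks
# teardown, as traced above:
umount "$mnt"
wipefs --all "${disk}p1"
wipefs --all "$disk"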
00:07:56.744 08:19:43 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:56.744 08:19:43 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:56.744 08:19:43 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.744 08:19:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:56.744 ************************************ 00:07:56.744 START TEST dm_mount 00:07:56.744 ************************************ 00:07:56.744 08:19:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:07:56.744 08:19:43 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:56.744 08:19:43 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:56.744 08:19:43 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:56.744 08:19:43 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:56.744 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:56.744 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:07:56.744 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:56.745 08:19:43 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:57.685 Creating new GPT entries in memory. 00:07:57.685 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:57.685 other utilities. 00:07:57.685 08:19:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:57.685 08:19:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:57.686 08:19:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:57.686 08:19:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:57.686 08:19:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:59.068 Creating new GPT entries in memory. 00:07:59.068 The operation has completed successfully. 
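The dm_mount variant now under way carves out two equal partitions (the second sgdisk call follows below) and stacks a device-mapper target on top. The log shows only 'dmsetup create nvme_dm_test', not the table it is fed; the linear concatenation below is an assumption that merely matches the 2097152-sector partition sizes visible in the sgdisk arguments.

disk=/dev/nvme0n1                          # destructive, as above
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199        # p1: 2097152 sectors
sgdisk "$disk" --new=2:2099200:4196351     # p2: 2097152 sectors
dmsetup create nvme_dm_test <<TABLE
0 2097152 linear ${disk}p1 0
2097152 2097152 linear ${disk}p2 0
TABLE
mkfs.ext4 -qF /dev/mapper/nvme_dm_test     # both traced further down
mkdir -p ./dm_mount && mount /dev/mapper/nvme_dm_test ./dm_mount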
00:07:59.068 08:19:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:59.068 08:19:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:59.069 08:19:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:59.069 08:19:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:59.069 08:19:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:08:00.010 The operation has completed successfully. 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 123614 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:00.010 08:19:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:02.554 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:08:02.815 08:19:49 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:02.815 08:19:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:05.360 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:05.621 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:05.621 00:08:05.621 real 0m8.905s 00:08:05.621 user 0m2.130s 00:08:05.621 sys 0m3.792s 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:05.621 08:19:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:08:05.621 ************************************ 00:08:05.621 END TEST dm_mount 00:08:05.621 ************************************ 00:08:05.621 08:19:52 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:08:05.621 08:19:52 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:08:05.621 08:19:52 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:08:05.621 08:19:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:05.621 08:19:52 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:05.621 08:19:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:05.621 08:19:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:05.882 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:08:05.882 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:08:05.882 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:05.882 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:05.882 08:19:52 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:08:05.882 08:19:52 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:05.882 08:19:52 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:05.882 08:19:52 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:05.882 08:19:52 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:05.882 08:19:52 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:05.882 08:19:52 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:05.882 00:08:05.882 real 0m23.675s 00:08:05.882 user 0m6.745s 00:08:05.882 sys 0m11.650s 00:08:05.882 08:19:52 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:05.882 08:19:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:05.882 ************************************ 00:08:05.882 END TEST devices 00:08:05.882 ************************************ 00:08:05.882 00:08:05.882 real 1m19.193s 00:08:05.882 user 0m26.181s 00:08:05.882 sys 0m43.714s 00:08:05.882 08:19:52 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:05.882 08:19:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:05.882 ************************************ 00:08:05.882 END TEST setup.sh 00:08:05.882 ************************************ 00:08:06.142 08:19:52 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:08.681 Hugepages 00:08:08.681 node hugesize free / total 00:08:08.681 node0 1048576kB 0 / 0 00:08:08.681 node0 2048kB 2048 / 2048 00:08:08.681 node1 1048576kB 0 / 0 00:08:08.681 node1 2048kB 0 / 0 00:08:08.681 00:08:08.681 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:08.681 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:08:08.681 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:08:08.681 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:08:08.681 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:08:08.681 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:08:08.681 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:08:08.681 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:08:08.681 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:08:08.941 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:08:08.941 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:08:08.941 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:08:08.941 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:08:08.941 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:08:08.941 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:08:08.941 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:08:08.941 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:08:08.941 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:08:08.941 08:19:55 -- spdk/autotest.sh@130 -- # uname -s 00:08:08.941 08:19:55 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:08:08.941 08:19:55 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:08:08.941 08:19:55 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:12.238 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:12.238 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:12.498 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:12.758 08:19:59 -- common/autotest_common.sh@1528 -- # sleep 1 00:08:13.697 08:20:00 -- common/autotest_common.sh@1529 -- # bdfs=() 00:08:13.697 08:20:00 -- common/autotest_common.sh@1529 -- # local bdfs 00:08:13.697 08:20:00 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:08:13.697 08:20:00 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:08:13.697 08:20:00 -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:13.697 08:20:00 -- common/autotest_common.sh@1509 -- # local bdfs 00:08:13.697 08:20:00 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:13.697 08:20:00 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:08:13.697 08:20:00 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:13.697 08:20:00 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:08:13.697 08:20:00 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:08:13.697 08:20:00 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:16.991 Waiting for block devices as requested 00:08:16.991 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:08:16.991 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:16.991 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:16.991 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:16.991 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:16.991 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:16.991 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:16.991 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:17.250 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:17.250 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:17.250 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:17.510 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:17.510 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:17.510 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:17.510 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:17.769 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:17.769 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:17.769 08:20:04 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
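The loop entered above visits each PCI address that get_nvme_bdfs collected by piping gen_nvme.sh through jq. As a standalone sketch of one iteration (hedged: it assumes nvme-cli is installed and the sysfs layout seen in this run; 0000:5e:00.0 is the controller this log reports):

    # Resolve the NVMe character device behind a PCI BDF, then test the OACS
    # namespace-management bit the same way the trace below does (oacs is 0xe
    # here, and 0xe & 0x8 != 0, so the check passes).
    bdf=0000:5e:00.0
    ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
    (( oacs & 0x8 )) && echo "/dev/$ctrlr supports namespace management"

The trace below performs the same readlink/grep/basename chain one call at a time, which is why each step appears as its own xtrace line.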
00:08:17.769 08:20:04 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:08:17.769 08:20:04 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:08:17.769 08:20:04 -- common/autotest_common.sh@1498 -- # grep 0000:5e:00.0/nvme/nvme 00:08:17.769 08:20:04 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:17.769 08:20:04 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:08:17.769 08:20:04 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:17.769 08:20:04 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:08:17.769 08:20:04 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:08:17.769 08:20:04 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:08:17.769 08:20:04 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:08:17.769 08:20:04 -- common/autotest_common.sh@1541 -- # grep oacs 00:08:17.769 08:20:04 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:08:17.769 08:20:04 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:08:17.769 08:20:04 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:08:17.769 08:20:04 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:08:17.769 08:20:04 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:08:17.769 08:20:04 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:08:17.769 08:20:04 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:08:17.769 08:20:04 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:08:17.769 08:20:04 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:08:17.769 08:20:04 -- common/autotest_common.sh@1553 -- # continue 00:08:17.769 08:20:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:08:17.769 08:20:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.769 08:20:04 -- common/autotest_common.sh@10 -- # set +x 00:08:18.029 08:20:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:08:18.029 08:20:04 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:18.029 08:20:04 -- common/autotest_common.sh@10 -- # set +x 00:08:18.029 08:20:04 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:20.568 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:20.568 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:20.568 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:20.568 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:20.568 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:20.568 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:20.568 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:20.828 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:20.828 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:20.828 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:20.828 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:20.828 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:20.828 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:20.828 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:20.828 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:20.828 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:21.768 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:21.768 08:20:08 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:08:21.768 08:20:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.768 08:20:08 -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.768 08:20:08 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:08:21.768 08:20:08 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:08:21.768 08:20:08 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:08:21.768 08:20:08 -- common/autotest_common.sh@1573 -- # bdfs=() 00:08:21.768 08:20:08 -- common/autotest_common.sh@1573 -- # local bdfs 00:08:21.768 08:20:08 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:08:21.768 08:20:08 -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:21.768 08:20:08 -- common/autotest_common.sh@1509 -- # local bdfs 00:08:21.768 08:20:08 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:21.768 08:20:08 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:21.768 08:20:08 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:08:21.768 08:20:08 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:08:21.768 08:20:08 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:08:21.768 08:20:08 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:08:21.768 08:20:08 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:08:21.768 08:20:08 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:08:21.768 08:20:08 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:21.768 08:20:08 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:08:21.768 08:20:08 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:5e:00.0 00:08:21.768 08:20:08 -- common/autotest_common.sh@1588 -- # [[ -z 0000:5e:00.0 ]] 00:08:21.768 08:20:08 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=132423 00:08:21.768 08:20:08 -- common/autotest_common.sh@1594 -- # waitforlisten 132423 00:08:21.768 08:20:08 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:21.768 08:20:08 -- common/autotest_common.sh@827 -- # '[' -z 132423 ']' 00:08:21.768 08:20:08 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.768 08:20:08 -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:21.768 08:20:08 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.768 08:20:08 -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:21.768 08:20:08 -- common/autotest_common.sh@10 -- # set +x 00:08:21.768 [2024-05-15 08:20:08.788027] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
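waitforlisten above blocks until the freshly launched spdk_tgt (pid 132423 in this run) answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-poll pattern, assuming the paths printed in this log (the 0.1 s retry cadence is this sketch's own choice; the max_retries=100 budget matches the trace):

    # Start the target in the background, then poll its UNIX-domain JSON-RPC
    # socket until it responds; rpc.py exits non-zero while the socket is down.
    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    for ((i = 0; i < 100; i++)); do
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done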
00:08:21.768 [2024-05-15 08:20:08.788078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132423 ] 00:08:22.027 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.027 [2024-05-15 08:20:08.858780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.027 [2024-05-15 08:20:08.936853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.596 08:20:09 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:22.596 08:20:09 -- common/autotest_common.sh@860 -- # return 0 00:08:22.596 08:20:09 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:08:22.596 08:20:09 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:08:22.596 08:20:09 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:08:25.886 nvme0n1 00:08:25.886 08:20:12 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:25.886 [2024-05-15 08:20:12.741322] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:08:25.886 request: 00:08:25.886 { 00:08:25.886 "nvme_ctrlr_name": "nvme0", 00:08:25.886 "password": "test", 00:08:25.886 "method": "bdev_nvme_opal_revert", 00:08:25.886 "req_id": 1 00:08:25.886 } 00:08:25.886 Got JSON-RPC error response 00:08:25.886 response: 00:08:25.886 { 00:08:25.886 "code": -32602, 00:08:25.886 "message": "Invalid parameters" 00:08:25.886 } 00:08:25.886 08:20:12 -- common/autotest_common.sh@1600 -- # true 00:08:25.886 08:20:12 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:08:25.886 08:20:12 -- common/autotest_common.sh@1604 -- # killprocess 132423 00:08:25.886 08:20:12 -- common/autotest_common.sh@946 -- # '[' -z 132423 ']' 00:08:25.886 08:20:12 -- common/autotest_common.sh@950 -- # kill -0 132423 00:08:25.886 08:20:12 -- common/autotest_common.sh@951 -- # uname 00:08:25.886 08:20:12 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:25.886 08:20:12 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 132423 00:08:25.886 08:20:12 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:25.886 08:20:12 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:25.886 08:20:12 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 132423' 00:08:25.886 killing process with pid 132423 00:08:25.886 08:20:12 -- common/autotest_common.sh@965 -- # kill 132423 00:08:25.886 08:20:12 -- common/autotest_common.sh@970 -- # wait 132423 00:08:27.792 08:20:14 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:08:27.792 08:20:14 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:08:27.792 08:20:14 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:27.792 08:20:14 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:27.792 08:20:14 -- spdk/autotest.sh@162 -- # timing_enter lib 00:08:27.792 08:20:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:27.792 08:20:14 -- common/autotest_common.sh@10 -- # set +x 00:08:27.792 08:20:14 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:27.792 08:20:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.792 08:20:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.792 
08:20:14 -- common/autotest_common.sh@10 -- # set +x 00:08:27.792 ************************************ 00:08:27.792 START TEST env 00:08:27.792 ************************************ 00:08:27.792 08:20:14 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:27.792 * Looking for test storage... 00:08:27.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:27.792 08:20:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:27.792 08:20:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.792 08:20:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.792 08:20:14 env -- common/autotest_common.sh@10 -- # set +x 00:08:27.792 ************************************ 00:08:27.792 START TEST env_memory 00:08:27.792 ************************************ 00:08:27.792 08:20:14 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:27.792 00:08:27.792 00:08:27.792 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.792 http://cunit.sourceforge.net/ 00:08:27.792 00:08:27.792 00:08:27.792 Suite: memory 00:08:27.792 Test: alloc and free memory map ...[2024-05-15 08:20:14.637206] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:27.792 passed 00:08:27.792 Test: mem map translation ...[2024-05-15 08:20:14.655627] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:27.792 [2024-05-15 08:20:14.655639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:27.792 [2024-05-15 08:20:14.655673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:27.792 [2024-05-15 08:20:14.655679] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:27.792 passed 00:08:27.792 Test: mem map registration ...[2024-05-15 08:20:14.692276] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:27.792 [2024-05-15 08:20:14.692289] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:27.792 passed 00:08:27.792 Test: mem map adjacent registrations ...passed 00:08:27.792 00:08:27.792 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.792 suites 1 1 n/a 0 0 00:08:27.792 tests 4 4 4 0 0 00:08:27.792 asserts 152 152 152 0 n/a 00:08:27.792 00:08:27.792 Elapsed time = 0.137 seconds 00:08:27.792 00:08:27.792 real 0m0.149s 00:08:27.792 user 0m0.140s 00:08:27.792 sys 0m0.009s 00:08:27.792 08:20:14 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.792 08:20:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:27.792 ************************************ 00:08:27.792 END TEST env_memory 
00:08:27.792 ************************************ 00:08:27.792 08:20:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:27.792 08:20:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.792 08:20:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.792 08:20:14 env -- common/autotest_common.sh@10 -- # set +x 00:08:27.792 ************************************ 00:08:27.792 START TEST env_vtophys 00:08:27.792 ************************************ 00:08:27.792 08:20:14 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:28.053 EAL: lib.eal log level changed from notice to debug 00:08:28.053 EAL: Detected lcore 0 as core 0 on socket 0 00:08:28.053 EAL: Detected lcore 1 as core 1 on socket 0 00:08:28.053 EAL: Detected lcore 2 as core 2 on socket 0 00:08:28.053 EAL: Detected lcore 3 as core 3 on socket 0 00:08:28.053 EAL: Detected lcore 4 as core 4 on socket 0 00:08:28.053 EAL: Detected lcore 5 as core 5 on socket 0 00:08:28.053 EAL: Detected lcore 6 as core 6 on socket 0 00:08:28.053 EAL: Detected lcore 7 as core 8 on socket 0 00:08:28.053 EAL: Detected lcore 8 as core 9 on socket 0 00:08:28.053 EAL: Detected lcore 9 as core 10 on socket 0 00:08:28.053 EAL: Detected lcore 10 as core 11 on socket 0 00:08:28.053 EAL: Detected lcore 11 as core 12 on socket 0 00:08:28.053 EAL: Detected lcore 12 as core 13 on socket 0 00:08:28.053 EAL: Detected lcore 13 as core 16 on socket 0 00:08:28.053 EAL: Detected lcore 14 as core 17 on socket 0 00:08:28.053 EAL: Detected lcore 15 as core 18 on socket 0 00:08:28.053 EAL: Detected lcore 16 as core 19 on socket 0 00:08:28.053 EAL: Detected lcore 17 as core 20 on socket 0 00:08:28.053 EAL: Detected lcore 18 as core 21 on socket 0 00:08:28.053 EAL: Detected lcore 19 as core 25 on socket 0 00:08:28.053 EAL: Detected lcore 20 as core 26 on socket 0 00:08:28.053 EAL: Detected lcore 21 as core 27 on socket 0 00:08:28.053 EAL: Detected lcore 22 as core 28 on socket 0 00:08:28.053 EAL: Detected lcore 23 as core 29 on socket 0 00:08:28.053 EAL: Detected lcore 24 as core 0 on socket 1 00:08:28.053 EAL: Detected lcore 25 as core 1 on socket 1 00:08:28.053 EAL: Detected lcore 26 as core 2 on socket 1 00:08:28.053 EAL: Detected lcore 27 as core 3 on socket 1 00:08:28.053 EAL: Detected lcore 28 as core 4 on socket 1 00:08:28.053 EAL: Detected lcore 29 as core 5 on socket 1 00:08:28.053 EAL: Detected lcore 30 as core 6 on socket 1 00:08:28.053 EAL: Detected lcore 31 as core 9 on socket 1 00:08:28.053 EAL: Detected lcore 32 as core 10 on socket 1 00:08:28.053 EAL: Detected lcore 33 as core 11 on socket 1 00:08:28.053 EAL: Detected lcore 34 as core 12 on socket 1 00:08:28.053 EAL: Detected lcore 35 as core 13 on socket 1 00:08:28.053 EAL: Detected lcore 36 as core 16 on socket 1 00:08:28.053 EAL: Detected lcore 37 as core 17 on socket 1 00:08:28.053 EAL: Detected lcore 38 as core 18 on socket 1 00:08:28.053 EAL: Detected lcore 39 as core 19 on socket 1 00:08:28.053 EAL: Detected lcore 40 as core 20 on socket 1 00:08:28.053 EAL: Detected lcore 41 as core 21 on socket 1 00:08:28.053 EAL: Detected lcore 42 as core 24 on socket 1 00:08:28.053 EAL: Detected lcore 43 as core 25 on socket 1 00:08:28.053 EAL: Detected lcore 44 as core 26 on socket 1 00:08:28.053 EAL: Detected lcore 45 as core 27 on socket 1 00:08:28.053 EAL: Detected lcore 46 as core 28 on socket 1 00:08:28.053 EAL: Detected 
lcore 47 as core 29 on socket 1 00:08:28.053 EAL: Detected lcore 48 as core 0 on socket 0 00:08:28.053 EAL: Detected lcore 49 as core 1 on socket 0 00:08:28.053 EAL: Detected lcore 50 as core 2 on socket 0 00:08:28.053 EAL: Detected lcore 51 as core 3 on socket 0 00:08:28.053 EAL: Detected lcore 52 as core 4 on socket 0 00:08:28.053 EAL: Detected lcore 53 as core 5 on socket 0 00:08:28.053 EAL: Detected lcore 54 as core 6 on socket 0 00:08:28.053 EAL: Detected lcore 55 as core 8 on socket 0 00:08:28.053 EAL: Detected lcore 56 as core 9 on socket 0 00:08:28.053 EAL: Detected lcore 57 as core 10 on socket 0 00:08:28.053 EAL: Detected lcore 58 as core 11 on socket 0 00:08:28.053 EAL: Detected lcore 59 as core 12 on socket 0 00:08:28.053 EAL: Detected lcore 60 as core 13 on socket 0 00:08:28.053 EAL: Detected lcore 61 as core 16 on socket 0 00:08:28.053 EAL: Detected lcore 62 as core 17 on socket 0 00:08:28.053 EAL: Detected lcore 63 as core 18 on socket 0 00:08:28.053 EAL: Detected lcore 64 as core 19 on socket 0 00:08:28.053 EAL: Detected lcore 65 as core 20 on socket 0 00:08:28.053 EAL: Detected lcore 66 as core 21 on socket 0 00:08:28.053 EAL: Detected lcore 67 as core 25 on socket 0 00:08:28.053 EAL: Detected lcore 68 as core 26 on socket 0 00:08:28.053 EAL: Detected lcore 69 as core 27 on socket 0 00:08:28.053 EAL: Detected lcore 70 as core 28 on socket 0 00:08:28.053 EAL: Detected lcore 71 as core 29 on socket 0 00:08:28.053 EAL: Detected lcore 72 as core 0 on socket 1 00:08:28.053 EAL: Detected lcore 73 as core 1 on socket 1 00:08:28.053 EAL: Detected lcore 74 as core 2 on socket 1 00:08:28.053 EAL: Detected lcore 75 as core 3 on socket 1 00:08:28.053 EAL: Detected lcore 76 as core 4 on socket 1 00:08:28.053 EAL: Detected lcore 77 as core 5 on socket 1 00:08:28.053 EAL: Detected lcore 78 as core 6 on socket 1 00:08:28.053 EAL: Detected lcore 79 as core 9 on socket 1 00:08:28.053 EAL: Detected lcore 80 as core 10 on socket 1 00:08:28.053 EAL: Detected lcore 81 as core 11 on socket 1 00:08:28.053 EAL: Detected lcore 82 as core 12 on socket 1 00:08:28.053 EAL: Detected lcore 83 as core 13 on socket 1 00:08:28.053 EAL: Detected lcore 84 as core 16 on socket 1 00:08:28.053 EAL: Detected lcore 85 as core 17 on socket 1 00:08:28.053 EAL: Detected lcore 86 as core 18 on socket 1 00:08:28.053 EAL: Detected lcore 87 as core 19 on socket 1 00:08:28.053 EAL: Detected lcore 88 as core 20 on socket 1 00:08:28.053 EAL: Detected lcore 89 as core 21 on socket 1 00:08:28.053 EAL: Detected lcore 90 as core 24 on socket 1 00:08:28.053 EAL: Detected lcore 91 as core 25 on socket 1 00:08:28.053 EAL: Detected lcore 92 as core 26 on socket 1 00:08:28.053 EAL: Detected lcore 93 as core 27 on socket 1 00:08:28.053 EAL: Detected lcore 94 as core 28 on socket 1 00:08:28.053 EAL: Detected lcore 95 as core 29 on socket 1 00:08:28.053 EAL: Maximum logical cores by configuration: 128 00:08:28.053 EAL: Detected CPU lcores: 96 00:08:28.053 EAL: Detected NUMA nodes: 2 00:08:28.053 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:28.053 EAL: Detected shared linkage of DPDK 00:08:28.053 EAL: No shared files mode enabled, IPC will be disabled 00:08:28.053 EAL: Bus pci wants IOVA as 'DC' 00:08:28.053 EAL: Buses did not request a specific IOVA mode. 00:08:28.053 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:28.053 EAL: Selected IOVA mode 'VA' 00:08:28.053 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.053 EAL: Probing VFIO support... 
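The lcore inventory above (96 lcores across 2 NUMA sockets) is the same topology Linux exposes through sysfs; as a quick hedged cross-check using only standard kernel paths, with no SPDK or DPDK involved:

    # Print "lcore N is core C on socket S" for every CPU, matching the EAL
    # detection lines above.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(<"$cpu/topology/core_id")
        socket=$(<"$cpu/topology/physical_package_id")
        echo "lcore $lcore is core $core on socket $socket"
    done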
00:08:28.053 EAL: IOMMU type 1 (Type 1) is supported 00:08:28.053 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:28.053 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:28.053 EAL: VFIO support initialized 00:08:28.053 EAL: Ask a virtual area of 0x2e000 bytes 00:08:28.053 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:28.053 EAL: Setting up physically contiguous memory... 00:08:28.053 EAL: Setting maximum number of open files to 524288 00:08:28.053 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:28.053 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:28.053 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:28.053 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.053 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:28.053 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:28.053 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.053 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:28.053 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:28.053 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.053 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:28.053 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:28.053 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.053 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:28.053 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:28.053 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.053 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:28.053 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:28.054 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.054 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:28.054 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:28.054 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.054 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:28.054 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:28.054 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.054 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:28.054 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:28.054 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:28.054 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.054 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:28.054 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:28.054 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.054 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:28.054 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:28.054 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.054 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:28.054 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:28.054 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.054 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:28.054 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:28.054 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.054 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:28.054 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:28.054 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.054 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:08:28.054 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:28.054 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.054 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:28.054 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:28.054 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.054 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:08:28.054 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:28.054 EAL: Hugepages will be freed exactly as allocated. 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: TSC frequency is ~2300000 KHz 00:08:28.054 EAL: Main lcore 0 is ready (tid=7fee96f1ea00;cpuset=[0]) 00:08:28.054 EAL: Trying to obtain current memory policy. 00:08:28.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.054 EAL: Restoring previous memory policy: 0 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was expanded by 2MB 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:28.054 EAL: Mem event callback 'spdk:(nil)' registered 00:08:28.054 00:08:28.054 00:08:28.054 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.054 http://cunit.sourceforge.net/ 00:08:28.054 00:08:28.054 00:08:28.054 Suite: components_suite 00:08:28.054 Test: vtophys_malloc_test ...passed 00:08:28.054 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:28.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.054 EAL: Restoring previous memory policy: 4 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was expanded by 4MB 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was shrunk by 4MB 00:08:28.054 EAL: Trying to obtain current memory policy. 00:08:28.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.054 EAL: Restoring previous memory policy: 4 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was expanded by 6MB 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was shrunk by 6MB 00:08:28.054 EAL: Trying to obtain current memory policy. 00:08:28.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.054 EAL: Restoring previous memory policy: 4 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was expanded by 10MB 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was shrunk by 10MB 00:08:28.054 EAL: Trying to obtain current memory policy. 
00:08:28.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.054 EAL: Restoring previous memory policy: 4 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was expanded by 18MB 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was shrunk by 18MB 00:08:28.054 EAL: Trying to obtain current memory policy. 00:08:28.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.054 EAL: Restoring previous memory policy: 4 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was expanded by 34MB 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was shrunk by 34MB 00:08:28.054 EAL: Trying to obtain current memory policy. 00:08:28.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.054 EAL: Restoring previous memory policy: 4 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was expanded by 66MB 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was shrunk by 66MB 00:08:28.054 EAL: Trying to obtain current memory policy. 00:08:28.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.054 EAL: Restoring previous memory policy: 4 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was expanded by 130MB 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was shrunk by 130MB 00:08:28.054 EAL: Trying to obtain current memory policy. 00:08:28.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.054 EAL: Restoring previous memory policy: 4 00:08:28.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.054 EAL: request: mp_malloc_sync 00:08:28.054 EAL: No shared files mode enabled, IPC is disabled 00:08:28.054 EAL: Heap on socket 0 was expanded by 258MB 00:08:28.314 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.314 EAL: request: mp_malloc_sync 00:08:28.314 EAL: No shared files mode enabled, IPC is disabled 00:08:28.314 EAL: Heap on socket 0 was shrunk by 258MB 00:08:28.314 EAL: Trying to obtain current memory policy. 
00:08:28.314 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.314 EAL: Restoring previous memory policy: 4 00:08:28.314 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.314 EAL: request: mp_malloc_sync 00:08:28.314 EAL: No shared files mode enabled, IPC is disabled 00:08:28.314 EAL: Heap on socket 0 was expanded by 514MB 00:08:28.314 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.574 EAL: request: mp_malloc_sync 00:08:28.574 EAL: No shared files mode enabled, IPC is disabled 00:08:28.574 EAL: Heap on socket 0 was shrunk by 514MB 00:08:28.574 EAL: Trying to obtain current memory policy. 00:08:28.574 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.574 EAL: Restoring previous memory policy: 4 00:08:28.574 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.574 EAL: request: mp_malloc_sync 00:08:28.574 EAL: No shared files mode enabled, IPC is disabled 00:08:28.574 EAL: Heap on socket 0 was expanded by 1026MB 00:08:28.834 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.094 EAL: request: mp_malloc_sync 00:08:29.094 EAL: No shared files mode enabled, IPC is disabled 00:08:29.094 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:29.094 passed 00:08:29.094 00:08:29.094 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.094 suites 1 1 n/a 0 0 00:08:29.094 tests 2 2 2 0 0 00:08:29.094 asserts 497 497 497 0 n/a 00:08:29.094 00:08:29.094 Elapsed time = 0.970 seconds 00:08:29.094 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.094 EAL: request: mp_malloc_sync 00:08:29.094 EAL: No shared files mode enabled, IPC is disabled 00:08:29.094 EAL: Heap on socket 0 was shrunk by 2MB 00:08:29.094 EAL: No shared files mode enabled, IPC is disabled 00:08:29.094 EAL: No shared files mode enabled, IPC is disabled 00:08:29.094 EAL: No shared files mode enabled, IPC is disabled 00:08:29.094 00:08:29.094 real 0m1.094s 00:08:29.094 user 0m0.641s 00:08:29.094 sys 0m0.426s 00:08:29.095 08:20:15 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.095 08:20:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:29.095 ************************************ 00:08:29.095 END TEST env_vtophys 00:08:29.095 ************************************ 00:08:29.095 08:20:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:29.095 08:20:15 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:29.095 08:20:15 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.095 08:20:15 env -- common/autotest_common.sh@10 -- # set +x 00:08:29.095 ************************************ 00:08:29.095 START TEST env_pci 00:08:29.095 ************************************ 00:08:29.095 08:20:15 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:29.095 00:08:29.095 00:08:29.095 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.095 http://cunit.sourceforge.net/ 00:08:29.095 00:08:29.095 00:08:29.095 Suite: pci 00:08:29.095 Test: pci_hook ...[2024-05-15 08:20:15.998857] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 133727 has claimed it 00:08:29.095 EAL: Cannot find device (10000:00:01.0) 00:08:29.095 EAL: Failed to attach device on primary process 00:08:29.095 passed 00:08:29.095 00:08:29.095 Run Summary: Type Total Ran Passed Failed Inactive 
00:08:29.095 suites 1 1 n/a 0 0 00:08:29.095 tests 1 1 1 0 0 00:08:29.095 asserts 25 25 25 0 n/a 00:08:29.095 00:08:29.095 Elapsed time = 0.026 seconds 00:08:29.095 00:08:29.095 real 0m0.046s 00:08:29.095 user 0m0.013s 00:08:29.095 sys 0m0.033s 00:08:29.095 08:20:16 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.095 08:20:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:29.095 ************************************ 00:08:29.095 END TEST env_pci 00:08:29.095 ************************************ 00:08:29.095 08:20:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:29.095 08:20:16 env -- env/env.sh@15 -- # uname 00:08:29.095 08:20:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:29.095 08:20:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:29.095 08:20:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:29.095 08:20:16 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:29.095 08:20:16 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.095 08:20:16 env -- common/autotest_common.sh@10 -- # set +x 00:08:29.095 ************************************ 00:08:29.095 START TEST env_dpdk_post_init 00:08:29.095 ************************************ 00:08:29.095 08:20:16 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:29.355 EAL: Detected CPU lcores: 96 00:08:29.355 EAL: Detected NUMA nodes: 2 00:08:29.355 EAL: Detected shared linkage of DPDK 00:08:29.355 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:29.355 EAL: Selected IOVA mode 'VA' 00:08:29.355 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.355 EAL: VFIO support initialized 00:08:29.355 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:29.355 EAL: Using IOMMU type 1 (Type 1) 00:08:29.355 EAL: Ignore mapping IO port bar(1) 00:08:29.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:08:29.355 EAL: Ignore mapping IO port bar(1) 00:08:29.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:08:29.355 EAL: Ignore mapping IO port bar(1) 00:08:29.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:08:29.355 EAL: Ignore mapping IO port bar(1) 00:08:29.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:08:29.355 EAL: Ignore mapping IO port bar(1) 00:08:29.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:08:29.355 EAL: Ignore mapping IO port bar(1) 00:08:29.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:08:29.355 EAL: Ignore mapping IO port bar(1) 00:08:29.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:08:29.355 EAL: Ignore mapping IO port bar(1) 00:08:29.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:08:30.295 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:08:30.295 EAL: Ignore mapping IO port bar(1) 00:08:30.295 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:08:30.295 EAL: Ignore mapping IO port bar(1) 00:08:30.295 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 
00:08:30.295 EAL: Ignore mapping IO port bar(1) 00:08:30.295 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:08:30.295 EAL: Ignore mapping IO port bar(1) 00:08:30.295 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:08:30.295 EAL: Ignore mapping IO port bar(1) 00:08:30.295 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:08:30.295 EAL: Ignore mapping IO port bar(1) 00:08:30.295 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:08:30.295 EAL: Ignore mapping IO port bar(1) 00:08:30.295 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:08:30.295 EAL: Ignore mapping IO port bar(1) 00:08:30.295 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:08:33.584 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:08:33.584 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:08:33.584 Starting DPDK initialization... 00:08:33.584 Starting SPDK post initialization... 00:08:33.584 SPDK NVMe probe 00:08:33.584 Attaching to 0000:5e:00.0 00:08:33.584 Attached to 0000:5e:00.0 00:08:33.584 Cleaning up... 00:08:33.584 00:08:33.584 real 0m4.369s 00:08:33.584 user 0m3.294s 00:08:33.584 sys 0m0.143s 00:08:33.584 08:20:20 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.584 08:20:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:33.584 ************************************ 00:08:33.584 END TEST env_dpdk_post_init 00:08:33.584 ************************************ 00:08:33.584 08:20:20 env -- env/env.sh@26 -- # uname 00:08:33.584 08:20:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:33.584 08:20:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:33.584 08:20:20 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:33.584 08:20:20 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.584 08:20:20 env -- common/autotest_common.sh@10 -- # set +x 00:08:33.584 ************************************ 00:08:33.584 START TEST env_mem_callbacks 00:08:33.584 ************************************ 00:08:33.584 08:20:20 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:33.584 EAL: Detected CPU lcores: 96 00:08:33.584 EAL: Detected NUMA nodes: 2 00:08:33.584 EAL: Detected shared linkage of DPDK 00:08:33.584 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:33.584 EAL: Selected IOVA mode 'VA' 00:08:33.584 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.584 EAL: VFIO support initialized 00:08:33.584 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:33.584 00:08:33.584 00:08:33.584 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.584 http://cunit.sourceforge.net/ 00:08:33.584 00:08:33.584 00:08:33.584 Suite: memory 00:08:33.584 Test: test ... 
00:08:33.584 register 0x200000200000 2097152 00:08:33.584 malloc 3145728 00:08:33.844 register 0x200000400000 4194304 00:08:33.844 buf 0x200000500000 len 3145728 PASSED 00:08:33.844 malloc 64 00:08:33.844 buf 0x2000004fff40 len 64 PASSED 00:08:33.844 malloc 4194304 00:08:33.844 register 0x200000800000 6291456 00:08:33.844 buf 0x200000a00000 len 4194304 PASSED 00:08:33.844 free 0x200000500000 3145728 00:08:33.844 free 0x2000004fff40 64 00:08:33.844 unregister 0x200000400000 4194304 PASSED 00:08:33.844 free 0x200000a00000 4194304 00:08:33.844 unregister 0x200000800000 6291456 PASSED 00:08:33.844 malloc 8388608 00:08:33.844 register 0x200000400000 10485760 00:08:33.844 buf 0x200000600000 len 8388608 PASSED 00:08:33.844 free 0x200000600000 8388608 00:08:33.844 unregister 0x200000400000 10485760 PASSED 00:08:33.844 passed 00:08:33.844 00:08:33.844 Run Summary: Type Total Ran Passed Failed Inactive 00:08:33.844 suites 1 1 n/a 0 0 00:08:33.844 tests 1 1 1 0 0 00:08:33.844 asserts 15 15 15 0 n/a 00:08:33.844 00:08:33.844 Elapsed time = 0.008 seconds 00:08:33.844 00:08:33.844 real 0m0.059s 00:08:33.844 user 0m0.021s 00:08:33.844 sys 0m0.038s 00:08:33.844 08:20:20 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.844 08:20:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:33.844 ************************************ 00:08:33.844 END TEST env_mem_callbacks 00:08:33.844 ************************************ 00:08:33.844 00:08:33.844 real 0m6.178s 00:08:33.844 user 0m4.300s 00:08:33.844 sys 0m0.933s 00:08:33.844 08:20:20 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.844 08:20:20 env -- common/autotest_common.sh@10 -- # set +x 00:08:33.844 ************************************ 00:08:33.844 END TEST env 00:08:33.844 ************************************ 00:08:33.844 08:20:20 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:33.844 08:20:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:33.844 08:20:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.844 08:20:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.844 ************************************ 00:08:33.844 START TEST rpc 00:08:33.844 ************************************ 00:08:33.844 08:20:20 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:33.844 * Looking for test storage... 00:08:33.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:33.844 08:20:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=134553 00:08:33.844 08:20:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:33.844 08:20:20 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:33.844 08:20:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 134553 00:08:33.844 08:20:20 rpc -- common/autotest_common.sh@827 -- # '[' -z 134553 ']' 00:08:33.844 08:20:20 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.844 08:20:20 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:33.844 08:20:20 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
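The rpc suite that starts here drives spdk_tgt entirely over /var/tmp/spdk.sock. Condensed into plain rpc.py invocations, the first rpc_integrity steps below amount to the following (a hedged sketch; the sizes match the trace, where bdev_malloc_create 8 512 yields Malloc0 with 16384 blocks of 512 bytes):

    # List bdevs (an empty JSON array before anything is created), then create
    # an 8 MiB malloc bdev with 512-byte blocks, as rpc_integrity does below.
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock bdev_get_bdevs | jq length
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock bdev_malloc_create 8 512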
00:08:33.844 08:20:20 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:33.844 08:20:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 [2024-05-15 08:20:20.871205] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:08:34.104 [2024-05-15 08:20:20.871253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134553 ] 00:08:34.104 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.104 [2024-05-15 08:20:20.938327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.104 [2024-05-15 08:20:21.010784] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:34.104 [2024-05-15 08:20:21.010840] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 134553' to capture a snapshot of events at runtime. 00:08:34.104 [2024-05-15 08:20:21.010847] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.104 [2024-05-15 08:20:21.010856] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.104 [2024-05-15 08:20:21.010861] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid134553 for offline analysis/debug. 00:08:34.104 [2024-05-15 08:20:21.010881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.673 08:20:21 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:34.673 08:20:21 rpc -- common/autotest_common.sh@860 -- # return 0 00:08:34.673 08:20:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:34.673 08:20:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:34.673 08:20:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:34.673 08:20:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:34.673 08:20:21 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:34.673 08:20:21 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:34.673 08:20:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.932 ************************************ 00:08:34.932 START TEST rpc_integrity 00:08:34.932 ************************************ 00:08:34.932 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:08:34.932 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:34.932 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.932 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.932 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.932 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:34.932 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:34.932 08:20:21 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:34.932 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:34.932 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.932 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.932 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.932 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:34.932 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:34.932 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.932 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.932 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.932 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:34.932 { 00:08:34.932 "name": "Malloc0", 00:08:34.932 "aliases": [ 00:08:34.932 "21cc3985-f767-499a-a7a6-5f5bb006e4c6" 00:08:34.932 ], 00:08:34.932 "product_name": "Malloc disk", 00:08:34.932 "block_size": 512, 00:08:34.932 "num_blocks": 16384, 00:08:34.932 "uuid": "21cc3985-f767-499a-a7a6-5f5bb006e4c6", 00:08:34.932 "assigned_rate_limits": { 00:08:34.932 "rw_ios_per_sec": 0, 00:08:34.932 "rw_mbytes_per_sec": 0, 00:08:34.932 "r_mbytes_per_sec": 0, 00:08:34.932 "w_mbytes_per_sec": 0 00:08:34.932 }, 00:08:34.932 "claimed": false, 00:08:34.932 "zoned": false, 00:08:34.932 "supported_io_types": { 00:08:34.932 "read": true, 00:08:34.932 "write": true, 00:08:34.932 "unmap": true, 00:08:34.932 "write_zeroes": true, 00:08:34.932 "flush": true, 00:08:34.932 "reset": true, 00:08:34.932 "compare": false, 00:08:34.932 "compare_and_write": false, 00:08:34.932 "abort": true, 00:08:34.932 "nvme_admin": false, 00:08:34.932 "nvme_io": false 00:08:34.932 }, 00:08:34.932 "memory_domains": [ 00:08:34.932 { 00:08:34.932 "dma_device_id": "system", 00:08:34.932 "dma_device_type": 1 00:08:34.932 }, 00:08:34.932 { 00:08:34.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.932 "dma_device_type": 2 00:08:34.932 } 00:08:34.932 ], 00:08:34.932 "driver_specific": {} 00:08:34.932 } 00:08:34.932 ]' 00:08:34.932 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:34.932 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:34.933 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.933 [2024-05-15 08:20:21.854823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:34.933 [2024-05-15 08:20:21.854852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.933 [2024-05-15 08:20:21.854864] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ca11a0 00:08:34.933 [2024-05-15 08:20:21.854871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.933 [2024-05-15 08:20:21.855955] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.933 [2024-05-15 08:20:21.855978] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:34.933 Passthru0 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.933 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.933 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:34.933 { 00:08:34.933 "name": "Malloc0", 00:08:34.933 "aliases": [ 00:08:34.933 "21cc3985-f767-499a-a7a6-5f5bb006e4c6" 00:08:34.933 ], 00:08:34.933 "product_name": "Malloc disk", 00:08:34.933 "block_size": 512, 00:08:34.933 "num_blocks": 16384, 00:08:34.933 "uuid": "21cc3985-f767-499a-a7a6-5f5bb006e4c6", 00:08:34.933 "assigned_rate_limits": { 00:08:34.933 "rw_ios_per_sec": 0, 00:08:34.933 "rw_mbytes_per_sec": 0, 00:08:34.933 "r_mbytes_per_sec": 0, 00:08:34.933 "w_mbytes_per_sec": 0 00:08:34.933 }, 00:08:34.933 "claimed": true, 00:08:34.933 "claim_type": "exclusive_write", 00:08:34.933 "zoned": false, 00:08:34.933 "supported_io_types": { 00:08:34.933 "read": true, 00:08:34.933 "write": true, 00:08:34.933 "unmap": true, 00:08:34.933 "write_zeroes": true, 00:08:34.933 "flush": true, 00:08:34.933 "reset": true, 00:08:34.933 "compare": false, 00:08:34.933 "compare_and_write": false, 00:08:34.933 "abort": true, 00:08:34.933 "nvme_admin": false, 00:08:34.933 "nvme_io": false 00:08:34.933 }, 00:08:34.933 "memory_domains": [ 00:08:34.933 { 00:08:34.933 "dma_device_id": "system", 00:08:34.933 "dma_device_type": 1 00:08:34.933 }, 00:08:34.933 { 00:08:34.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.933 "dma_device_type": 2 00:08:34.933 } 00:08:34.933 ], 00:08:34.933 "driver_specific": {} 00:08:34.933 }, 00:08:34.933 { 00:08:34.933 "name": "Passthru0", 00:08:34.933 "aliases": [ 00:08:34.933 "49d80c05-bd5f-59a9-9e16-aa8edbe6a7dd" 00:08:34.933 ], 00:08:34.933 "product_name": "passthru", 00:08:34.933 "block_size": 512, 00:08:34.933 "num_blocks": 16384, 00:08:34.933 "uuid": "49d80c05-bd5f-59a9-9e16-aa8edbe6a7dd", 00:08:34.933 "assigned_rate_limits": { 00:08:34.933 "rw_ios_per_sec": 0, 00:08:34.933 "rw_mbytes_per_sec": 0, 00:08:34.933 "r_mbytes_per_sec": 0, 00:08:34.933 "w_mbytes_per_sec": 0 00:08:34.933 }, 00:08:34.933 "claimed": false, 00:08:34.933 "zoned": false, 00:08:34.933 "supported_io_types": { 00:08:34.933 "read": true, 00:08:34.933 "write": true, 00:08:34.933 "unmap": true, 00:08:34.933 "write_zeroes": true, 00:08:34.933 "flush": true, 00:08:34.933 "reset": true, 00:08:34.933 "compare": false, 00:08:34.933 "compare_and_write": false, 00:08:34.933 "abort": true, 00:08:34.933 "nvme_admin": false, 00:08:34.933 "nvme_io": false 00:08:34.933 }, 00:08:34.933 "memory_domains": [ 00:08:34.933 { 00:08:34.933 "dma_device_id": "system", 00:08:34.933 "dma_device_type": 1 00:08:34.933 }, 00:08:34.933 { 00:08:34.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.933 "dma_device_type": 2 00:08:34.933 } 00:08:34.933 ], 00:08:34.933 "driver_specific": { 00:08:34.933 "passthru": { 00:08:34.933 "name": "Passthru0", 00:08:34.933 "base_bdev_name": "Malloc0" 00:08:34.933 } 00:08:34.933 } 00:08:34.933 } 00:08:34.933 ]' 00:08:34.933 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:34.933 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:34.933 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.933 
08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.933 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.933 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.933 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.933 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:34.933 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:35.193 08:20:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:35.193 00:08:35.193 real 0m0.270s 00:08:35.193 user 0m0.164s 00:08:35.193 sys 0m0.037s 00:08:35.193 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.193 08:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.193 ************************************ 00:08:35.193 END TEST rpc_integrity 00:08:35.193 ************************************ 00:08:35.193 08:20:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:35.193 08:20:22 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:35.193 08:20:22 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.193 08:20:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.193 ************************************ 00:08:35.193 START TEST rpc_plugins 00:08:35.193 ************************************ 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:35.193 { 00:08:35.193 "name": "Malloc1", 00:08:35.193 "aliases": [ 00:08:35.193 "c62f0924-56ee-42bc-bd39-c235f6b8f3cf" 00:08:35.193 ], 00:08:35.193 "product_name": "Malloc disk", 00:08:35.193 "block_size": 4096, 00:08:35.193 "num_blocks": 256, 00:08:35.193 "uuid": "c62f0924-56ee-42bc-bd39-c235f6b8f3cf", 00:08:35.193 "assigned_rate_limits": { 00:08:35.193 "rw_ios_per_sec": 0, 00:08:35.193 "rw_mbytes_per_sec": 0, 00:08:35.193 "r_mbytes_per_sec": 0, 00:08:35.193 "w_mbytes_per_sec": 0 00:08:35.193 }, 00:08:35.193 "claimed": false, 00:08:35.193 "zoned": false, 00:08:35.193 "supported_io_types": { 00:08:35.193 "read": true, 00:08:35.193 "write": true, 00:08:35.193 "unmap": true, 00:08:35.193 "write_zeroes": true, 00:08:35.193 
"flush": true, 00:08:35.193 "reset": true, 00:08:35.193 "compare": false, 00:08:35.193 "compare_and_write": false, 00:08:35.193 "abort": true, 00:08:35.193 "nvme_admin": false, 00:08:35.193 "nvme_io": false 00:08:35.193 }, 00:08:35.193 "memory_domains": [ 00:08:35.193 { 00:08:35.193 "dma_device_id": "system", 00:08:35.193 "dma_device_type": 1 00:08:35.193 }, 00:08:35.193 { 00:08:35.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.193 "dma_device_type": 2 00:08:35.193 } 00:08:35.193 ], 00:08:35.193 "driver_specific": {} 00:08:35.193 } 00:08:35.193 ]' 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:35.193 08:20:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:35.193 00:08:35.193 real 0m0.132s 00:08:35.193 user 0m0.085s 00:08:35.193 sys 0m0.014s 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.193 08:20:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:35.193 ************************************ 00:08:35.193 END TEST rpc_plugins 00:08:35.193 ************************************ 00:08:35.452 08:20:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:35.452 08:20:22 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:35.452 08:20:22 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.452 08:20:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.452 ************************************ 00:08:35.452 START TEST rpc_trace_cmd_test 00:08:35.452 ************************************ 00:08:35.452 08:20:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:08:35.452 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:35.452 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:35.452 08:20:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.452 08:20:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.452 08:20:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.453 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:35.453 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid134553", 00:08:35.453 "tpoint_group_mask": "0x8", 00:08:35.453 "iscsi_conn": { 00:08:35.453 "mask": "0x2", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "scsi": { 00:08:35.453 "mask": "0x4", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "bdev": { 00:08:35.453 "mask": "0x8", 00:08:35.453 "tpoint_mask": 
"0xffffffffffffffff" 00:08:35.453 }, 00:08:35.453 "nvmf_rdma": { 00:08:35.453 "mask": "0x10", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "nvmf_tcp": { 00:08:35.453 "mask": "0x20", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "ftl": { 00:08:35.453 "mask": "0x40", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "blobfs": { 00:08:35.453 "mask": "0x80", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "dsa": { 00:08:35.453 "mask": "0x200", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "thread": { 00:08:35.453 "mask": "0x400", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "nvme_pcie": { 00:08:35.453 "mask": "0x800", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "iaa": { 00:08:35.453 "mask": "0x1000", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "nvme_tcp": { 00:08:35.453 "mask": "0x2000", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "bdev_nvme": { 00:08:35.453 "mask": "0x4000", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 }, 00:08:35.453 "sock": { 00:08:35.453 "mask": "0x8000", 00:08:35.453 "tpoint_mask": "0x0" 00:08:35.453 } 00:08:35.453 }' 00:08:35.453 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:35.453 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:08:35.453 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:35.453 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:35.453 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:35.453 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:35.453 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:35.453 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:35.453 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:35.712 08:20:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:35.712 00:08:35.712 real 0m0.224s 00:08:35.712 user 0m0.186s 00:08:35.712 sys 0m0.029s 00:08:35.712 08:20:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.712 08:20:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.712 ************************************ 00:08:35.712 END TEST rpc_trace_cmd_test 00:08:35.712 ************************************ 00:08:35.712 08:20:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:35.712 08:20:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:35.712 08:20:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:35.712 08:20:22 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:35.712 08:20:22 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.712 08:20:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.712 ************************************ 00:08:35.712 START TEST rpc_daemon_integrity 00:08:35.712 ************************************ 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:35.712 { 00:08:35.712 "name": "Malloc2", 00:08:35.712 "aliases": [ 00:08:35.712 "6686511d-3261-4342-87f8-fdd6c775a7ee" 00:08:35.712 ], 00:08:35.712 "product_name": "Malloc disk", 00:08:35.712 "block_size": 512, 00:08:35.712 "num_blocks": 16384, 00:08:35.712 "uuid": "6686511d-3261-4342-87f8-fdd6c775a7ee", 00:08:35.712 "assigned_rate_limits": { 00:08:35.712 "rw_ios_per_sec": 0, 00:08:35.712 "rw_mbytes_per_sec": 0, 00:08:35.712 "r_mbytes_per_sec": 0, 00:08:35.712 "w_mbytes_per_sec": 0 00:08:35.712 }, 00:08:35.712 "claimed": false, 00:08:35.712 "zoned": false, 00:08:35.712 "supported_io_types": { 00:08:35.712 "read": true, 00:08:35.712 "write": true, 00:08:35.712 "unmap": true, 00:08:35.712 "write_zeroes": true, 00:08:35.712 "flush": true, 00:08:35.712 "reset": true, 00:08:35.712 "compare": false, 00:08:35.712 "compare_and_write": false, 00:08:35.712 "abort": true, 00:08:35.712 "nvme_admin": false, 00:08:35.712 "nvme_io": false 00:08:35.712 }, 00:08:35.712 "memory_domains": [ 00:08:35.712 { 00:08:35.712 "dma_device_id": "system", 00:08:35.712 "dma_device_type": 1 00:08:35.712 }, 00:08:35.712 { 00:08:35.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.712 "dma_device_type": 2 00:08:35.712 } 00:08:35.712 ], 00:08:35.712 "driver_specific": {} 00:08:35.712 } 00:08:35.712 ]' 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.712 [2024-05-15 08:20:22.689109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:35.712 [2024-05-15 08:20:22.689138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.712 [2024-05-15 08:20:22.689151] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ca2560 00:08:35.712 [2024-05-15 08:20:22.689157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.712 [2024-05-15 08:20:22.690147] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.712 [2024-05-15 08:20:22.690183] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:35.712 Passthru0 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.712 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:35.712 { 00:08:35.712 "name": "Malloc2", 00:08:35.712 "aliases": [ 00:08:35.712 "6686511d-3261-4342-87f8-fdd6c775a7ee" 00:08:35.712 ], 00:08:35.712 "product_name": "Malloc disk", 00:08:35.712 "block_size": 512, 00:08:35.712 "num_blocks": 16384, 00:08:35.712 "uuid": "6686511d-3261-4342-87f8-fdd6c775a7ee", 00:08:35.712 "assigned_rate_limits": { 00:08:35.712 "rw_ios_per_sec": 0, 00:08:35.712 "rw_mbytes_per_sec": 0, 00:08:35.712 "r_mbytes_per_sec": 0, 00:08:35.712 "w_mbytes_per_sec": 0 00:08:35.712 }, 00:08:35.712 "claimed": true, 00:08:35.712 "claim_type": "exclusive_write", 00:08:35.712 "zoned": false, 00:08:35.712 "supported_io_types": { 00:08:35.712 "read": true, 00:08:35.712 "write": true, 00:08:35.712 "unmap": true, 00:08:35.712 "write_zeroes": true, 00:08:35.712 "flush": true, 00:08:35.712 "reset": true, 00:08:35.712 "compare": false, 00:08:35.712 "compare_and_write": false, 00:08:35.712 "abort": true, 00:08:35.712 "nvme_admin": false, 00:08:35.712 "nvme_io": false 00:08:35.712 }, 00:08:35.712 "memory_domains": [ 00:08:35.712 { 00:08:35.712 "dma_device_id": "system", 00:08:35.712 "dma_device_type": 1 00:08:35.712 }, 00:08:35.712 { 00:08:35.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.712 "dma_device_type": 2 00:08:35.712 } 00:08:35.712 ], 00:08:35.712 "driver_specific": {} 00:08:35.712 }, 00:08:35.712 { 00:08:35.712 "name": "Passthru0", 00:08:35.712 "aliases": [ 00:08:35.712 "25be0e36-8119-563f-8602-7e3a7fd9f4b8" 00:08:35.712 ], 00:08:35.712 "product_name": "passthru", 00:08:35.712 "block_size": 512, 00:08:35.712 "num_blocks": 16384, 00:08:35.712 "uuid": "25be0e36-8119-563f-8602-7e3a7fd9f4b8", 00:08:35.712 "assigned_rate_limits": { 00:08:35.712 "rw_ios_per_sec": 0, 00:08:35.712 "rw_mbytes_per_sec": 0, 00:08:35.712 "r_mbytes_per_sec": 0, 00:08:35.712 "w_mbytes_per_sec": 0 00:08:35.712 }, 00:08:35.712 "claimed": false, 00:08:35.712 "zoned": false, 00:08:35.712 "supported_io_types": { 00:08:35.712 "read": true, 00:08:35.712 "write": true, 00:08:35.713 "unmap": true, 00:08:35.713 "write_zeroes": true, 00:08:35.713 "flush": true, 00:08:35.713 "reset": true, 00:08:35.713 "compare": false, 00:08:35.713 "compare_and_write": false, 00:08:35.713 "abort": true, 00:08:35.713 "nvme_admin": false, 00:08:35.713 "nvme_io": false 00:08:35.713 }, 00:08:35.713 "memory_domains": [ 00:08:35.713 { 00:08:35.713 "dma_device_id": "system", 00:08:35.713 "dma_device_type": 1 00:08:35.713 }, 00:08:35.713 { 00:08:35.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.713 "dma_device_type": 2 00:08:35.713 } 00:08:35.713 ], 00:08:35.713 "driver_specific": { 00:08:35.713 "passthru": { 00:08:35.713 "name": "Passthru0", 00:08:35.713 "base_bdev_name": "Malloc2" 00:08:35.713 } 00:08:35.713 } 00:08:35.713 } 00:08:35.713 ]' 00:08:35.713 08:20:22 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:35.972 00:08:35.972 real 0m0.277s 00:08:35.972 user 0m0.175s 00:08:35.972 sys 0m0.038s 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.972 08:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:35.972 ************************************ 00:08:35.972 END TEST rpc_daemon_integrity 00:08:35.972 ************************************ 00:08:35.972 08:20:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:35.972 08:20:22 rpc -- rpc/rpc.sh@84 -- # killprocess 134553 00:08:35.972 08:20:22 rpc -- common/autotest_common.sh@946 -- # '[' -z 134553 ']' 00:08:35.972 08:20:22 rpc -- common/autotest_common.sh@950 -- # kill -0 134553 00:08:35.972 08:20:22 rpc -- common/autotest_common.sh@951 -- # uname 00:08:35.972 08:20:22 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:35.972 08:20:22 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 134553 00:08:35.972 08:20:22 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:35.972 08:20:22 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:35.972 08:20:22 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 134553' 00:08:35.972 killing process with pid 134553 00:08:35.972 08:20:22 rpc -- common/autotest_common.sh@965 -- # kill 134553 00:08:35.972 08:20:22 rpc -- common/autotest_common.sh@970 -- # wait 134553 00:08:36.231 00:08:36.231 real 0m2.517s 00:08:36.231 user 0m3.226s 00:08:36.231 sys 0m0.687s 00:08:36.231 08:20:23 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:36.231 08:20:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.231 ************************************ 00:08:36.231 END TEST rpc 00:08:36.231 ************************************ 00:08:36.491 08:20:23 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:36.491 08:20:23 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:36.491 08:20:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.491 08:20:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.491 ************************************ 00:08:36.491 START TEST skip_rpc 00:08:36.491 ************************************ 00:08:36.491 08:20:23 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:36.491 * Looking for test storage... 00:08:36.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:36.491 08:20:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:36.491 08:20:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:36.491 08:20:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:36.491 08:20:23 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:36.491 08:20:23 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.491 08:20:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.491 ************************************ 00:08:36.491 START TEST skip_rpc 00:08:36.491 ************************************ 00:08:36.491 08:20:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:08:36.491 08:20:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=135184 00:08:36.491 08:20:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.491 08:20:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:36.491 08:20:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:36.491 [2024-05-15 08:20:23.492907] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:08:36.491 [2024-05-15 08:20:23.492946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135184 ] 00:08:36.750 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.750 [2024-05-15 08:20:23.561009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.750 [2024-05-15 08:20:23.634460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 135184 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 135184 ']' 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 135184 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135184 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135184' 00:08:42.021 killing process with pid 135184 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 135184 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 135184 00:08:42.021 00:08:42.021 real 0m5.393s 00:08:42.021 user 0m5.149s 00:08:42.021 sys 0m0.269s 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:42.021 08:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.021 ************************************ 00:08:42.021 END TEST skip_rpc 
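skip_rpc above is the negative test: the target was started with --no-rpc-server, so /var/tmp/spdk.sock never appears, rpc_cmd spdk_get_version fails, and the NOT wrapper inverts that failure (es=1) into a pass. A minimal sketch of the same assertion, with a hypothetical stand-in for the harness's NOT helper:

    NOT() { ! "$@"; }                             # succeed only when the command fails
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    NOT ./scripts/rpc.py spdk_get_version         # passes: no RPC server is listening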
00:08:42.021 ************************************ 00:08:42.021 08:20:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:42.021 08:20:28 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:42.021 08:20:28 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:42.021 08:20:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.021 ************************************ 00:08:42.021 START TEST skip_rpc_with_json 00:08:42.021 ************************************ 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=136132 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 136132 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 136132 ']' 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:42.021 08:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:42.021 [2024-05-15 08:20:28.960589] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
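skip_rpc_with_json, starting here, exercises a configuration round-trip: nvmf_get_transports is expected to fail while no transport exists (the "No such device" error just below), nvmf_create_transport -t tcp then creates one, save_config dumps the full subsystem configuration as the JSON that follows, and a second target is started with --json pointing at that dump; grepping its log for "TCP Transport Init" proves the transport was recreated from the file. A minimal sketch of the round-trip, with hypothetical file names:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json             # JSON like the dump below
    kill $pid                                              # stop the first target
    ./build/bin/spdk_tgt --json config.json > log.txt &    # replay the saved config
    grep -q 'TCP Transport Init' log.txt                   # transport came back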
00:08:42.021 [2024-05-15 08:20:28.960629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136132 ] 00:08:42.021 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.021 [2024-05-15 08:20:29.029501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.279 [2024-05-15 08:20:29.110194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.845 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:42.845 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:08:42.845 08:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:42.845 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.845 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:42.845 [2024-05-15 08:20:29.771445] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:42.845 request: 00:08:42.845 { 00:08:42.845 "trtype": "tcp", 00:08:42.846 "method": "nvmf_get_transports", 00:08:42.846 "req_id": 1 00:08:42.846 } 00:08:42.846 Got JSON-RPC error response 00:08:42.846 response: 00:08:42.846 { 00:08:42.846 "code": -19, 00:08:42.846 "message": "No such device" 00:08:42.846 } 00:08:42.846 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:42.846 08:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:42.846 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.846 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:42.846 [2024-05-15 08:20:29.783545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.846 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.846 08:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:42.846 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.846 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:43.105 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.105 08:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:43.105 { 00:08:43.105 "subsystems": [ 00:08:43.105 { 00:08:43.105 "subsystem": "vfio_user_target", 00:08:43.105 "config": null 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "subsystem": "keyring", 00:08:43.105 "config": [] 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "subsystem": "iobuf", 00:08:43.105 "config": [ 00:08:43.105 { 00:08:43.105 "method": "iobuf_set_options", 00:08:43.105 "params": { 00:08:43.105 "small_pool_count": 8192, 00:08:43.105 "large_pool_count": 1024, 00:08:43.105 "small_bufsize": 8192, 00:08:43.105 "large_bufsize": 135168 00:08:43.105 } 00:08:43.105 } 00:08:43.105 ] 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "subsystem": "sock", 00:08:43.105 "config": [ 00:08:43.105 { 00:08:43.105 "method": "sock_impl_set_options", 00:08:43.105 "params": { 00:08:43.105 "impl_name": "posix", 00:08:43.105 "recv_buf_size": 2097152, 00:08:43.105 "send_buf_size": 2097152, 
00:08:43.105 "enable_recv_pipe": true, 00:08:43.105 "enable_quickack": false, 00:08:43.105 "enable_placement_id": 0, 00:08:43.105 "enable_zerocopy_send_server": true, 00:08:43.105 "enable_zerocopy_send_client": false, 00:08:43.105 "zerocopy_threshold": 0, 00:08:43.105 "tls_version": 0, 00:08:43.105 "enable_ktls": false 00:08:43.105 } 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "method": "sock_impl_set_options", 00:08:43.105 "params": { 00:08:43.105 "impl_name": "ssl", 00:08:43.105 "recv_buf_size": 4096, 00:08:43.105 "send_buf_size": 4096, 00:08:43.105 "enable_recv_pipe": true, 00:08:43.105 "enable_quickack": false, 00:08:43.105 "enable_placement_id": 0, 00:08:43.105 "enable_zerocopy_send_server": true, 00:08:43.105 "enable_zerocopy_send_client": false, 00:08:43.105 "zerocopy_threshold": 0, 00:08:43.105 "tls_version": 0, 00:08:43.105 "enable_ktls": false 00:08:43.105 } 00:08:43.105 } 00:08:43.105 ] 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "subsystem": "vmd", 00:08:43.105 "config": [] 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "subsystem": "accel", 00:08:43.105 "config": [ 00:08:43.105 { 00:08:43.105 "method": "accel_set_options", 00:08:43.105 "params": { 00:08:43.105 "small_cache_size": 128, 00:08:43.105 "large_cache_size": 16, 00:08:43.105 "task_count": 2048, 00:08:43.105 "sequence_count": 2048, 00:08:43.105 "buf_count": 2048 00:08:43.105 } 00:08:43.105 } 00:08:43.105 ] 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "subsystem": "bdev", 00:08:43.105 "config": [ 00:08:43.105 { 00:08:43.105 "method": "bdev_set_options", 00:08:43.105 "params": { 00:08:43.105 "bdev_io_pool_size": 65535, 00:08:43.105 "bdev_io_cache_size": 256, 00:08:43.105 "bdev_auto_examine": true, 00:08:43.105 "iobuf_small_cache_size": 128, 00:08:43.105 "iobuf_large_cache_size": 16 00:08:43.105 } 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "method": "bdev_raid_set_options", 00:08:43.105 "params": { 00:08:43.105 "process_window_size_kb": 1024 00:08:43.105 } 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "method": "bdev_iscsi_set_options", 00:08:43.105 "params": { 00:08:43.105 "timeout_sec": 30 00:08:43.105 } 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "method": "bdev_nvme_set_options", 00:08:43.105 "params": { 00:08:43.105 "action_on_timeout": "none", 00:08:43.105 "timeout_us": 0, 00:08:43.105 "timeout_admin_us": 0, 00:08:43.105 "keep_alive_timeout_ms": 10000, 00:08:43.105 "arbitration_burst": 0, 00:08:43.105 "low_priority_weight": 0, 00:08:43.105 "medium_priority_weight": 0, 00:08:43.105 "high_priority_weight": 0, 00:08:43.105 "nvme_adminq_poll_period_us": 10000, 00:08:43.105 "nvme_ioq_poll_period_us": 0, 00:08:43.105 "io_queue_requests": 0, 00:08:43.105 "delay_cmd_submit": true, 00:08:43.105 "transport_retry_count": 4, 00:08:43.105 "bdev_retry_count": 3, 00:08:43.105 "transport_ack_timeout": 0, 00:08:43.105 "ctrlr_loss_timeout_sec": 0, 00:08:43.105 "reconnect_delay_sec": 0, 00:08:43.105 "fast_io_fail_timeout_sec": 0, 00:08:43.105 "disable_auto_failback": false, 00:08:43.105 "generate_uuids": false, 00:08:43.105 "transport_tos": 0, 00:08:43.105 "nvme_error_stat": false, 00:08:43.105 "rdma_srq_size": 0, 00:08:43.105 "io_path_stat": false, 00:08:43.105 "allow_accel_sequence": false, 00:08:43.105 "rdma_max_cq_size": 0, 00:08:43.105 "rdma_cm_event_timeout_ms": 0, 00:08:43.105 "dhchap_digests": [ 00:08:43.105 "sha256", 00:08:43.105 "sha384", 00:08:43.105 "sha512" 00:08:43.105 ], 00:08:43.105 "dhchap_dhgroups": [ 00:08:43.105 "null", 00:08:43.105 "ffdhe2048", 00:08:43.105 "ffdhe3072", 00:08:43.105 "ffdhe4096", 00:08:43.105 
"ffdhe6144", 00:08:43.105 "ffdhe8192" 00:08:43.105 ] 00:08:43.105 } 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "method": "bdev_nvme_set_hotplug", 00:08:43.105 "params": { 00:08:43.105 "period_us": 100000, 00:08:43.105 "enable": false 00:08:43.105 } 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "method": "bdev_wait_for_examine" 00:08:43.105 } 00:08:43.105 ] 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "subsystem": "scsi", 00:08:43.105 "config": null 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "subsystem": "scheduler", 00:08:43.105 "config": [ 00:08:43.105 { 00:08:43.105 "method": "framework_set_scheduler", 00:08:43.105 "params": { 00:08:43.105 "name": "static" 00:08:43.105 } 00:08:43.105 } 00:08:43.105 ] 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "subsystem": "vhost_scsi", 00:08:43.105 "config": [] 00:08:43.105 }, 00:08:43.105 { 00:08:43.105 "subsystem": "vhost_blk", 00:08:43.105 "config": [] 00:08:43.105 }, 00:08:43.105 { 00:08:43.106 "subsystem": "ublk", 00:08:43.106 "config": [] 00:08:43.106 }, 00:08:43.106 { 00:08:43.106 "subsystem": "nbd", 00:08:43.106 "config": [] 00:08:43.106 }, 00:08:43.106 { 00:08:43.106 "subsystem": "nvmf", 00:08:43.106 "config": [ 00:08:43.106 { 00:08:43.106 "method": "nvmf_set_config", 00:08:43.106 "params": { 00:08:43.106 "discovery_filter": "match_any", 00:08:43.106 "admin_cmd_passthru": { 00:08:43.106 "identify_ctrlr": false 00:08:43.106 } 00:08:43.106 } 00:08:43.106 }, 00:08:43.106 { 00:08:43.106 "method": "nvmf_set_max_subsystems", 00:08:43.106 "params": { 00:08:43.106 "max_subsystems": 1024 00:08:43.106 } 00:08:43.106 }, 00:08:43.106 { 00:08:43.106 "method": "nvmf_set_crdt", 00:08:43.106 "params": { 00:08:43.106 "crdt1": 0, 00:08:43.106 "crdt2": 0, 00:08:43.106 "crdt3": 0 00:08:43.106 } 00:08:43.106 }, 00:08:43.106 { 00:08:43.106 "method": "nvmf_create_transport", 00:08:43.106 "params": { 00:08:43.106 "trtype": "TCP", 00:08:43.106 "max_queue_depth": 128, 00:08:43.106 "max_io_qpairs_per_ctrlr": 127, 00:08:43.106 "in_capsule_data_size": 4096, 00:08:43.106 "max_io_size": 131072, 00:08:43.106 "io_unit_size": 131072, 00:08:43.106 "max_aq_depth": 128, 00:08:43.106 "num_shared_buffers": 511, 00:08:43.106 "buf_cache_size": 4294967295, 00:08:43.106 "dif_insert_or_strip": false, 00:08:43.106 "zcopy": false, 00:08:43.106 "c2h_success": true, 00:08:43.106 "sock_priority": 0, 00:08:43.106 "abort_timeout_sec": 1, 00:08:43.106 "ack_timeout": 0, 00:08:43.106 "data_wr_pool_size": 0 00:08:43.106 } 00:08:43.106 } 00:08:43.106 ] 00:08:43.106 }, 00:08:43.106 { 00:08:43.106 "subsystem": "iscsi", 00:08:43.106 "config": [ 00:08:43.106 { 00:08:43.106 "method": "iscsi_set_options", 00:08:43.106 "params": { 00:08:43.106 "node_base": "iqn.2016-06.io.spdk", 00:08:43.106 "max_sessions": 128, 00:08:43.106 "max_connections_per_session": 2, 00:08:43.106 "max_queue_depth": 64, 00:08:43.106 "default_time2wait": 2, 00:08:43.106 "default_time2retain": 20, 00:08:43.106 "first_burst_length": 8192, 00:08:43.106 "immediate_data": true, 00:08:43.106 "allow_duplicated_isid": false, 00:08:43.106 "error_recovery_level": 0, 00:08:43.106 "nop_timeout": 60, 00:08:43.106 "nop_in_interval": 30, 00:08:43.106 "disable_chap": false, 00:08:43.106 "require_chap": false, 00:08:43.106 "mutual_chap": false, 00:08:43.106 "chap_group": 0, 00:08:43.106 "max_large_datain_per_connection": 64, 00:08:43.106 "max_r2t_per_connection": 4, 00:08:43.106 "pdu_pool_size": 36864, 00:08:43.106 "immediate_data_pool_size": 16384, 00:08:43.106 "data_out_pool_size": 2048 00:08:43.106 } 00:08:43.106 } 00:08:43.106 ] 00:08:43.106 } 
00:08:43.106 ] 00:08:43.106 } 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 136132 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 136132 ']' 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 136132 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 136132 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 136132' 00:08:43.106 killing process with pid 136132 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 136132 00:08:43.106 08:20:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 136132 00:08:43.365 08:20:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=136374 00:08:43.365 08:20:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:43.365 08:20:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 136374 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 136374 ']' 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 136374 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 136374 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 136374' 00:08:48.639 killing process with pid 136374 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 136374 00:08:48.639 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 136374 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:48.899 00:08:48.899 real 0m6.803s 00:08:48.899 user 0m6.612s 00:08:48.899 sys 0m0.604s 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:48.899 08:20:35 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:48.899 ************************************ 00:08:48.899 END TEST skip_rpc_with_json 00:08:48.899 ************************************ 00:08:48.899 08:20:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:48.899 08:20:35 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:48.899 08:20:35 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:48.899 08:20:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.899 ************************************ 00:08:48.899 START TEST skip_rpc_with_delay 00:08:48.899 ************************************ 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:48.899 [2024-05-15 08:20:35.837606] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
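skip_rpc_with_delay checks argument validation rather than RPC traffic: --wait-for-rpc tells the app to hold subsystem initialization until an RPC arrives, which contradicts --no-rpc-server, so spdk_app_start refuses to run and prints the error above; the NOT wrapper again turns that startup failure into a pass. The failing invocation is essentially:

    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # must fail: contradictory flags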
00:08:48.899 [2024-05-15 08:20:35.837669] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:48.899 00:08:48.899 real 0m0.064s 00:08:48.899 user 0m0.043s 00:08:48.899 sys 0m0.021s 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:48.899 08:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:48.899 ************************************ 00:08:48.899 END TEST skip_rpc_with_delay 00:08:48.899 ************************************ 00:08:48.899 08:20:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:48.899 08:20:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:48.899 08:20:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:48.899 08:20:35 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:48.899 08:20:35 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:48.899 08:20:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.158 ************************************ 00:08:49.158 START TEST exit_on_failed_rpc_init 00:08:49.158 ************************************ 00:08:49.158 08:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:08:49.158 08:20:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=137345 00:08:49.158 08:20:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 137345 00:08:49.158 08:20:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:49.158 08:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 137345 ']' 00:08:49.158 08:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.158 08:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:49.158 08:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.158 08:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:49.158 08:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:49.158 [2024-05-15 08:20:35.972058] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:08:49.158 [2024-05-15 08:20:35.972101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137345 ] 00:08:49.158 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.158 [2024-05-15 08:20:36.037348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.158 [2024-05-15 08:20:36.116625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:50.095 08:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:50.095 [2024-05-15 08:20:36.824349] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:08:50.095 [2024-05-15 08:20:36.824395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137578 ] 00:08:50.095 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.095 [2024-05-15 08:20:36.888736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.095 [2024-05-15 08:20:36.960484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.095 [2024-05-15 08:20:36.960550] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
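
What the log records here: a first spdk_tgt (pid 137345) owns /var/tmp/spdk.sock, so a second instance fails _spdk_rpc_listen and stops with a non-zero status, which the test then maps down to es=1. An illustrative outline of the scenario, not the exact skip_rpc.sh code (waitforlisten, NOT, and killprocess are the autotest_common.sh helpers seen in the xtrace):

# Illustrative outline of test_exit_on_failed_rpc_init as replayed above.
./build/bin/spdk_tgt -m 0x1 &       # first target claims /var/tmp/spdk.sock
spdk_pid=$!
waitforlisten "$spdk_pid"           # block until the RPC socket answers

NOT ./build/bin/spdk_tgt -m 0x2     # same socket path: must fail to init
killprocess "$spdk_pid"             # then tear down the surviving target
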
00:08:50.095 [2024-05-15 08:20:36.960559] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:50.095 [2024-05-15 08:20:36.960565] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.095 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:08:50.095 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:50.095 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:08:50.095 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:08:50.095 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:08:50.095 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:50.095 08:20:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:50.095 08:20:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 137345 00:08:50.095 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 137345 ']' 00:08:50.096 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 137345 00:08:50.096 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:08:50.096 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:50.096 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 137345 00:08:50.096 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:50.096 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:50.096 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 137345' 00:08:50.096 killing process with pid 137345 00:08:50.096 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 137345 00:08:50.096 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 137345 00:08:50.664 00:08:50.664 real 0m1.507s 00:08:50.664 user 0m1.735s 00:08:50.664 sys 0m0.408s 00:08:50.664 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:50.664 08:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:50.664 ************************************ 00:08:50.664 END TEST exit_on_failed_rpc_init 00:08:50.664 ************************************ 00:08:50.664 08:20:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:50.664 00:08:50.664 real 0m14.147s 00:08:50.664 user 0m13.687s 00:08:50.664 sys 0m1.549s 00:08:50.664 08:20:37 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:50.664 08:20:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.664 ************************************ 00:08:50.664 END TEST skip_rpc 00:08:50.664 ************************************ 00:08:50.664 08:20:37 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:50.664 08:20:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:50.664 08:20:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:50.664 08:20:37 -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.664 ************************************ 00:08:50.664 START TEST rpc_client 00:08:50.664 ************************************ 00:08:50.664 08:20:37 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:50.664 * Looking for test storage... 00:08:50.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:50.664 08:20:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:50.664 OK 00:08:50.664 08:20:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:50.664 00:08:50.664 real 0m0.113s 00:08:50.664 user 0m0.048s 00:08:50.664 sys 0m0.072s 00:08:50.664 08:20:37 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:50.664 08:20:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:50.664 ************************************ 00:08:50.664 END TEST rpc_client 00:08:50.664 ************************************ 00:08:50.924 08:20:37 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:50.924 08:20:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:50.924 08:20:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:50.924 08:20:37 -- common/autotest_common.sh@10 -- # set +x 00:08:50.924 ************************************ 00:08:50.924 START TEST json_config 00:08:50.924 ************************************ 00:08:50.924 08:20:37 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.924 08:20:37 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.924 08:20:37 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.924 08:20:37 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.924 08:20:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.924 08:20:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.924 08:20:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.924 08:20:37 json_config -- paths/export.sh@5 -- # export PATH 00:08:50.924 08:20:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@47 -- # : 0 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.924 08:20:37 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:50.924 08:20:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:50.925 08:20:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:50.925 08:20:37 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:50.925 08:20:37 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:08:50.925 INFO: JSON configuration test init 00:08:50.925 08:20:37 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:08:50.925 08:20:37 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:08:50.925 08:20:37 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:50.925 08:20:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.925 08:20:37 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:08:50.925 08:20:37 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:50.925 08:20:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.925 08:20:37 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:08:50.925 08:20:37 json_config -- json_config/common.sh@9 -- # local app=target 00:08:50.925 08:20:37 json_config -- json_config/common.sh@10 -- # shift 00:08:50.925 08:20:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:50.925 08:20:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:50.925 08:20:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:50.925 08:20:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:50.925 08:20:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:50.925 08:20:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=137914 00:08:50.925 08:20:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:50.925 Waiting for target to run... 
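
The "Waiting for target to run..." message belongs to waitforlisten, invoked next. A reduced sketch of the idea (the function name here is illustrative; the real helper in autotest_common.sh caps retries, supports custom RPC addresses, and is what max_retries=100 above refers to):

# Reduced sketch of waitforlisten: poll until the target's RPC socket
# responds, bailing out early if the process has already died.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1        # target died early
        ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
            >/dev/null 2>&1 && return 0               # RPC server is up
        sleep 0.1
    done
    return 1                                          # timed out
}
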
00:08:50.925 08:20:37 json_config -- json_config/common.sh@25 -- # waitforlisten 137914 /var/tmp/spdk_tgt.sock 00:08:50.925 08:20:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:50.925 08:20:37 json_config -- common/autotest_common.sh@827 -- # '[' -z 137914 ']' 00:08:50.925 08:20:37 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:50.925 08:20:37 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:50.925 08:20:37 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:50.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:50.925 08:20:37 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:50.925 08:20:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.925 [2024-05-15 08:20:37.878510] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:08:50.925 [2024-05-15 08:20:37.878556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137914 ] 00:08:50.925 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.184 [2024-05-15 08:20:38.159532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.443 [2024-05-15 08:20:38.226800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.703 08:20:38 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:51.703 08:20:38 json_config -- common/autotest_common.sh@860 -- # return 0 00:08:51.703 08:20:38 json_config -- json_config/common.sh@26 -- # echo '' 00:08:51.703 00:08:51.703 08:20:38 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:08:51.703 08:20:38 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:08:51.703 08:20:38 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:51.703 08:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:51.703 08:20:38 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:08:51.703 08:20:38 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:08:51.703 08:20:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.703 08:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:51.703 08:20:38 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:51.703 08:20:38 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:08:51.703 08:20:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:54.995 08:20:41 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:54.995 08:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:54.995 08:20:41 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:08:54.995 08:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@48 -- # local get_types 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:08:54.995 08:20:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.995 08:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@55 -- # return 0 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:08:54.995 08:20:41 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:08:54.995 08:20:41 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:54.995 08:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:54.995 08:20:42 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:54.995 08:20:42 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:08:54.995 08:20:42 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:08:54.995 08:20:42 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:54.995 08:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:55.256 MallocForNvmf0 00:08:55.256 08:20:42 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:55.256 08:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:55.516 MallocForNvmf1 00:08:55.516 08:20:42 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:55.516 08:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:55.516 [2024-05-15 08:20:42.515133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.775 08:20:42 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.775 08:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.775 08:20:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:55.775 08:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:56.035 08:20:42 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:56.035 08:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:56.294 08:20:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:56.294 08:20:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:56.294 [2024-05-15 08:20:43.241119] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:56.294 [2024-05-15 08:20:43.241470] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:56.294 08:20:43 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:08:56.294 08:20:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.294 08:20:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:56.294 08:20:43 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:08:56.294 08:20:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.294 08:20:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:56.552 08:20:43 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:08:56.552 08:20:43 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:56.552 08:20:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:56.552 MallocBdevForConfigChangeCheck 00:08:56.552 08:20:43 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:08:56.552 08:20:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.552 08:20:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:56.552 08:20:43 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:08:56.552 08:20:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:57.121 08:20:43 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:08:57.121 INFO: shutting down applications... 00:08:57.121 08:20:43 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:08:57.121 08:20:43 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:08:57.121 08:20:43 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:08:57.121 08:20:43 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:58.502 Calling clear_iscsi_subsystem 00:08:58.502 Calling clear_nvmf_subsystem 00:08:58.502 Calling clear_nbd_subsystem 00:08:58.502 Calling clear_ublk_subsystem 00:08:58.502 Calling clear_vhost_blk_subsystem 00:08:58.502 Calling clear_vhost_scsi_subsystem 00:08:58.502 Calling clear_bdev_subsystem 00:08:58.502 08:20:45 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:58.502 08:20:45 json_config -- json_config/json_config.sh@343 -- # count=100 00:08:58.502 08:20:45 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:08:58.502 08:20:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:58.502 08:20:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:58.502 08:20:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:59.071 08:20:45 json_config -- json_config/json_config.sh@345 -- # break 00:08:59.071 08:20:45 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:08:59.071 08:20:45 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:08:59.071 08:20:45 json_config -- json_config/common.sh@31 -- # local app=target 00:08:59.071 08:20:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:59.071 08:20:45 json_config -- json_config/common.sh@35 -- # [[ -n 137914 ]] 00:08:59.071 08:20:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 137914 00:08:59.071 [2024-05-15 08:20:45.814212] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:59.071 08:20:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:59.071 08:20:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:59.071 08:20:45 json_config -- json_config/common.sh@41 -- # kill -0 137914 00:08:59.071 08:20:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:59.331 08:20:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:59.331 08:20:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:59.331 08:20:46 json_config -- json_config/common.sh@41 -- # kill -0 137914 00:08:59.331 08:20:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:59.331 08:20:46 json_config -- json_config/common.sh@43 -- # break 00:08:59.331 08:20:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:59.331 08:20:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:59.331 SPDK target shutdown done 00:08:59.331 08:20:46 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:08:59.331 INFO: relaunching applications... 00:08:59.331 08:20:46 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:59.331 08:20:46 json_config -- json_config/common.sh@9 -- # local app=target 00:08:59.331 08:20:46 json_config -- json_config/common.sh@10 -- # shift 00:08:59.331 08:20:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:59.331 08:20:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:59.331 08:20:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:59.331 08:20:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:59.331 08:20:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:59.331 08:20:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=139426 00:08:59.332 08:20:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:59.332 Waiting for target to run... 00:08:59.332 08:20:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:59.332 08:20:46 json_config -- json_config/common.sh@25 -- # waitforlisten 139426 /var/tmp/spdk_tgt.sock 00:08:59.332 08:20:46 json_config -- common/autotest_common.sh@827 -- # '[' -z 139426 ']' 00:08:59.332 08:20:46 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:59.332 08:20:46 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:59.332 08:20:46 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:59.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:59.332 08:20:46 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:59.332 08:20:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:59.591 [2024-05-15 08:20:46.370627] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
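
The relaunch replays spdk_tgt_config.json, which captured the nvmf setup built over RPC earlier in this test. Reconstructed from the tgt_rpc calls visible in the xtrace above (paths relative to the SPDK repo root), that sequence is:

# The nvmf target configuration saved into spdk_tgt_config.json,
# as issued against the target's RPC socket earlier in the run.
S=/var/tmp/spdk_tgt.sock
./scripts/rpc.py -s $S bdev_malloc_create 8 512 --name MallocForNvmf0
./scripts/rpc.py -s $S bdev_malloc_create 4 1024 --name MallocForNvmf1
./scripts/rpc.py -s $S nvmf_create_transport -t tcp -u 8192 -c 0
./scripts/rpc.py -s $S nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001
./scripts/rpc.py -s $S nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
./scripts/rpc.py -s $S nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
./scripts/rpc.py -s $S nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 127.0.0.1 -s 4420
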
00:08:59.591 [2024-05-15 08:20:46.370684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139426 ] 00:08:59.591 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.850 [2024-05-15 08:20:46.665487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.850 [2024-05-15 08:20:46.733904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.143 [2024-05-15 08:20:49.733447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.143 [2024-05-15 08:20:49.765450] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:03.143 [2024-05-15 08:20:49.765766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:03.143 08:20:49 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:03.143 08:20:49 json_config -- common/autotest_common.sh@860 -- # return 0 00:09:03.143 08:20:49 json_config -- json_config/common.sh@26 -- # echo '' 00:09:03.143 00:09:03.143 08:20:49 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:09:03.143 08:20:49 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:03.143 INFO: Checking if target configuration is the same... 00:09:03.143 08:20:49 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:09:03.143 08:20:49 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.143 08:20:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:03.143 + '[' 2 -ne 2 ']' 00:09:03.143 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:03.143 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:09:03.143 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.143 +++ basename /dev/fd/62 00:09:03.143 ++ mktemp /tmp/62.XXX 00:09:03.143 + tmp_file_1=/tmp/62.aZB 00:09:03.143 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.143 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:03.143 + tmp_file_2=/tmp/spdk_tgt_config.json.XdH 00:09:03.143 + ret=0 00:09:03.143 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:03.143 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:03.403 + diff -u /tmp/62.aZB /tmp/spdk_tgt_config.json.XdH 00:09:03.403 + echo 'INFO: JSON config files are the same' 00:09:03.403 INFO: JSON config files are the same 00:09:03.403 + rm /tmp/62.aZB /tmp/spdk_tgt_config.json.XdH 00:09:03.403 + exit 0 00:09:03.403 08:20:50 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:09:03.403 08:20:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:03.403 INFO: changing configuration and checking if this can be detected... 
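
The "JSON config files are the same" verdict that follows comes from json_diff.sh: the live config (save_config over RPC, passed in as /dev/fd/62) and the saved file are both canonicalized with config_filter.py -method sort and then compared with diff -u. In outline (the temp-file names in the log are randomized by mktemp; fixed names are used here for clarity):

# Outline of the comparison json_diff.sh performs on both sides.
live=$(mktemp) saved=$(mktemp)
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > "$live"
./test/json_config/config_filter.py -method sort \
    < spdk_tgt_config.json > "$saved"
diff -u "$saved" "$live" && echo 'INFO: JSON config files are the same'
rm -f "$live" "$saved"
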
00:09:03.403 08:20:50 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:03.403 08:20:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:03.403 08:20:50 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.403 08:20:50 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:09:03.403 08:20:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:03.403 + '[' 2 -ne 2 ']' 00:09:03.403 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:03.403 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:09:03.403 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.403 +++ basename /dev/fd/62 00:09:03.403 ++ mktemp /tmp/62.XXX 00:09:03.403 + tmp_file_1=/tmp/62.5g1 00:09:03.403 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.403 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:03.403 + tmp_file_2=/tmp/spdk_tgt_config.json.cGd 00:09:03.403 + ret=0 00:09:03.403 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:03.662 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:03.922 + diff -u /tmp/62.5g1 /tmp/spdk_tgt_config.json.cGd 00:09:03.922 + ret=1 00:09:03.922 + echo '=== Start of file: /tmp/62.5g1 ===' 00:09:03.922 + cat /tmp/62.5g1 00:09:03.922 + echo '=== End of file: /tmp/62.5g1 ===' 00:09:03.922 + echo '' 00:09:03.922 + echo '=== Start of file: /tmp/spdk_tgt_config.json.cGd ===' 00:09:03.922 + cat /tmp/spdk_tgt_config.json.cGd 00:09:03.922 + echo '=== End of file: /tmp/spdk_tgt_config.json.cGd ===' 00:09:03.922 + echo '' 00:09:03.922 + rm /tmp/62.5g1 /tmp/spdk_tgt_config.json.cGd 00:09:03.922 + exit 1 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:09:03.922 INFO: configuration change detected. 
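
Deleting the MallocBdevForConfigChangeCheck sentinel bdev is what makes the second diff return 1, proving that a configuration change is detected. The killprocess calls that follow wrap process teardown; schematically (simplified from autotest_common.sh, which additionally checks via uname and ps that it is not about to kill a sudo process):

# Schematic killprocess: verify the pid is still alive, signal the
# reactor to exit, then reap the process to collect its exit status.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # not running: nothing to do
    kill "$pid"
    wait "$pid"
}
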
00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@317 -- # [[ -n 139426 ]] 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@193 -- # uname -s 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:03.922 08:20:50 json_config -- json_config/json_config.sh@323 -- # killprocess 139426 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@946 -- # '[' -z 139426 ']' 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@950 -- # kill -0 139426 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@951 -- # uname 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 139426 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 139426' 00:09:03.922 killing process with pid 139426 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@965 -- # kill 139426 00:09:03.922 [2024-05-15 08:20:50.835431] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:03.922 08:20:50 json_config -- common/autotest_common.sh@970 -- # wait 139426 00:09:05.829 08:20:52 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:05.829 08:20:52 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:09:05.829 08:20:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.829 08:20:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:05.829 08:20:52 
json_config -- json_config/json_config.sh@328 -- # return 0 00:09:05.829 08:20:52 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:09:05.829 INFO: Success 00:09:05.829 00:09:05.829 real 0m14.690s 00:09:05.829 user 0m15.652s 00:09:05.829 sys 0m1.731s 00:09:05.829 08:20:52 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:05.829 08:20:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:05.829 ************************************ 00:09:05.829 END TEST json_config 00:09:05.829 ************************************ 00:09:05.829 08:20:52 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:05.829 08:20:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:05.829 08:20:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:05.829 08:20:52 -- common/autotest_common.sh@10 -- # set +x 00:09:05.829 ************************************ 00:09:05.829 START TEST json_config_extra_key 00:09:05.829 ************************************ 00:09:05.829 08:20:52 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:05.829 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.829 08:20:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:05.829 08:20:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.829 08:20:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.829 08:20:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.829 08:20:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.829 08:20:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.829 08:20:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.829 08:20:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.829 08:20:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.829 08:20:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.830 08:20:52 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.830 08:20:52 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.830 
08:20:52 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.830 08:20:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.830 08:20:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.830 08:20:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.830 08:20:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:05.830 08:20:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:05.830 08:20:52 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:05.830 08:20:52 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:05.830 INFO: launching applications... 00:09:05.830 08:20:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=140541 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:05.830 Waiting for target to run... 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 140541 /var/tmp/spdk_tgt.sock 00:09:05.830 08:20:52 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 140541 ']' 00:09:05.830 08:20:52 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:05.830 08:20:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:05.830 08:20:52 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:05.830 08:20:52 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:05.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:05.830 08:20:52 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:05.830 08:20:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:05.830 [2024-05-15 08:20:52.638192] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
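
Once started, this extra_key target is stopped through the same json_config_test_shutdown_app path the earlier json_config run exercised: SIGINT first, then poll for exit. Reconstructed from the xtrace, the loop gives the target a 30 x 0.5 s budget:

# The shutdown loop from json_config/common.sh as replayed in this log:
# SIGINT the target, then poll kill -0 until it exits or time runs out.
kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$app_pid" 2>/dev/null || break   # process gone: shutdown done
    sleep 0.5
done
echo 'SPDK target shutdown done'
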
00:09:05.830 [2024-05-15 08:20:52.638250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140541 ] 00:09:05.830 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.090 [2024-05-15 08:20:52.922065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.090 [2024-05-15 08:20:52.990040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.657 08:20:53 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:06.657 08:20:53 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:09:06.657 08:20:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:06.657 00:09:06.657 08:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:06.657 INFO: shutting down applications... 00:09:06.657 08:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:06.657 08:20:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:06.657 08:20:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:06.657 08:20:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 140541 ]] 00:09:06.657 08:20:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 140541 00:09:06.657 08:20:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:06.657 08:20:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:06.657 08:20:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 140541 00:09:06.657 08:20:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:07.227 08:20:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:07.227 08:20:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:07.227 08:20:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 140541 00:09:07.227 08:20:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:07.227 08:20:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:07.227 08:20:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:07.227 08:20:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:07.227 SPDK target shutdown done 00:09:07.227 08:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:07.227 Success 00:09:07.227 00:09:07.227 real 0m1.457s 00:09:07.227 user 0m1.257s 00:09:07.227 sys 0m0.385s 00:09:07.227 08:20:53 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:07.227 08:20:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:07.227 ************************************ 00:09:07.227 END TEST json_config_extra_key 00:09:07.227 ************************************ 00:09:07.227 08:20:53 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:07.227 08:20:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:07.227 08:20:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:07.227 08:20:53 -- common/autotest_common.sh@10 -- # set +x 00:09:07.227 ************************************ 
00:09:07.227 START TEST alias_rpc 00:09:07.227 ************************************ 00:09:07.227 08:20:54 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:07.227 * Looking for test storage... 00:09:07.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:09:07.227 08:20:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:07.227 08:20:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=140902 00:09:07.227 08:20:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 140902 00:09:07.227 08:20:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:07.227 08:20:54 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 140902 ']' 00:09:07.227 08:20:54 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.227 08:20:54 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:07.227 08:20:54 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.227 08:20:54 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:07.227 08:20:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.227 [2024-05-15 08:20:54.160010] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:09:07.227 [2024-05-15 08:20:54.160065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140902 ] 00:09:07.227 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.227 [2024-05-15 08:20:54.228799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.487 [2024-05-15 08:20:54.309178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.055 08:20:54 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:08.055 08:20:54 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:08.055 08:20:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:08.314 08:20:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 140902 00:09:08.314 08:20:55 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 140902 ']' 00:09:08.314 08:20:55 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 140902 00:09:08.314 08:20:55 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:09:08.314 08:20:55 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:08.314 08:20:55 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 140902 00:09:08.314 08:20:55 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:08.314 08:20:55 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:08.315 08:20:55 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 140902' 00:09:08.315 killing process with pid 140902 00:09:08.315 08:20:55 alias_rpc -- common/autotest_common.sh@965 -- # kill 140902 00:09:08.315 08:20:55 alias_rpc -- common/autotest_common.sh@970 -- # wait 140902 00:09:08.574 
00:09:08.574 real 0m1.511s 00:09:08.574 user 0m1.654s 00:09:08.574 sys 0m0.390s 00:09:08.574 08:20:55 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:08.574 08:20:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.574 ************************************ 00:09:08.574 END TEST alias_rpc 00:09:08.574 ************************************ 00:09:08.574 08:20:55 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:09:08.574 08:20:55 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:08.574 08:20:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:08.574 08:20:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:08.574 08:20:55 -- common/autotest_common.sh@10 -- # set +x 00:09:08.834 ************************************ 00:09:08.834 START TEST spdkcli_tcp 00:09:08.834 ************************************ 00:09:08.834 08:20:55 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:08.834 * Looking for test storage... 00:09:08.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:08.834 08:20:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:08.834 08:20:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:08.834 08:20:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:08.834 08:20:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:08.834 08:20:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:08.834 08:20:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:08.834 08:20:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:08.834 08:20:55 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:08.834 08:20:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.834 08:20:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=141250 00:09:08.834 08:20:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:08.834 08:20:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 141250 00:09:08.834 08:20:55 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 141250 ']' 00:09:08.834 08:20:55 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.834 08:20:55 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:08.834 08:20:55 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.834 08:20:55 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:08.834 08:20:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.834 [2024-05-15 08:20:55.743146] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
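The spdkcli_tcp test starting here is about reaching the RPC server over TCP rather than over the UNIX socket: it bridges /var/tmp/spdk.sock to 127.0.0.1:9998 with socat and then drives rpc.py against the TCP endpoint, which is exactly the pair of commands logged below. Stripped of the harness (full script paths shortened), the bridge is just:

    # forward TCP port 9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    # -r retries, -t timeout, as in the log
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods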
00:09:08.834 [2024-05-15 08:20:55.743197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141250 ] 00:09:08.834 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.834 [2024-05-15 08:20:55.807997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.093 [2024-05-15 08:20:55.881821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.093 [2024-05-15 08:20:55.881822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.662 08:20:56 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:09.662 08:20:56 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:09:09.662 08:20:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:09.662 08:20:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=141265 00:09:09.662 08:20:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:09.921 [ 00:09:09.921 "bdev_malloc_delete", 00:09:09.921 "bdev_malloc_create", 00:09:09.921 "bdev_null_resize", 00:09:09.921 "bdev_null_delete", 00:09:09.921 "bdev_null_create", 00:09:09.921 "bdev_nvme_cuse_unregister", 00:09:09.921 "bdev_nvme_cuse_register", 00:09:09.921 "bdev_opal_new_user", 00:09:09.921 "bdev_opal_set_lock_state", 00:09:09.921 "bdev_opal_delete", 00:09:09.921 "bdev_opal_get_info", 00:09:09.921 "bdev_opal_create", 00:09:09.921 "bdev_nvme_opal_revert", 00:09:09.921 "bdev_nvme_opal_init", 00:09:09.921 "bdev_nvme_send_cmd", 00:09:09.921 "bdev_nvme_get_path_iostat", 00:09:09.921 "bdev_nvme_get_mdns_discovery_info", 00:09:09.921 "bdev_nvme_stop_mdns_discovery", 00:09:09.921 "bdev_nvme_start_mdns_discovery", 00:09:09.921 "bdev_nvme_set_multipath_policy", 00:09:09.921 "bdev_nvme_set_preferred_path", 00:09:09.921 "bdev_nvme_get_io_paths", 00:09:09.921 "bdev_nvme_remove_error_injection", 00:09:09.921 "bdev_nvme_add_error_injection", 00:09:09.921 "bdev_nvme_get_discovery_info", 00:09:09.921 "bdev_nvme_stop_discovery", 00:09:09.921 "bdev_nvme_start_discovery", 00:09:09.921 "bdev_nvme_get_controller_health_info", 00:09:09.921 "bdev_nvme_disable_controller", 00:09:09.921 "bdev_nvme_enable_controller", 00:09:09.921 "bdev_nvme_reset_controller", 00:09:09.921 "bdev_nvme_get_transport_statistics", 00:09:09.921 "bdev_nvme_apply_firmware", 00:09:09.921 "bdev_nvme_detach_controller", 00:09:09.921 "bdev_nvme_get_controllers", 00:09:09.921 "bdev_nvme_attach_controller", 00:09:09.921 "bdev_nvme_set_hotplug", 00:09:09.921 "bdev_nvme_set_options", 00:09:09.921 "bdev_passthru_delete", 00:09:09.921 "bdev_passthru_create", 00:09:09.921 "bdev_lvol_check_shallow_copy", 00:09:09.921 "bdev_lvol_start_shallow_copy", 00:09:09.921 "bdev_lvol_grow_lvstore", 00:09:09.921 "bdev_lvol_get_lvols", 00:09:09.921 "bdev_lvol_get_lvstores", 00:09:09.921 "bdev_lvol_delete", 00:09:09.921 "bdev_lvol_set_read_only", 00:09:09.921 "bdev_lvol_resize", 00:09:09.922 "bdev_lvol_decouple_parent", 00:09:09.922 "bdev_lvol_inflate", 00:09:09.922 "bdev_lvol_rename", 00:09:09.922 "bdev_lvol_clone_bdev", 00:09:09.922 "bdev_lvol_clone", 00:09:09.922 "bdev_lvol_snapshot", 00:09:09.922 "bdev_lvol_create", 00:09:09.922 "bdev_lvol_delete_lvstore", 00:09:09.922 "bdev_lvol_rename_lvstore", 00:09:09.922 "bdev_lvol_create_lvstore", 00:09:09.922 "bdev_raid_set_options", 
00:09:09.922 "bdev_raid_remove_base_bdev", 00:09:09.922 "bdev_raid_add_base_bdev", 00:09:09.922 "bdev_raid_delete", 00:09:09.922 "bdev_raid_create", 00:09:09.922 "bdev_raid_get_bdevs", 00:09:09.922 "bdev_error_inject_error", 00:09:09.922 "bdev_error_delete", 00:09:09.922 "bdev_error_create", 00:09:09.922 "bdev_split_delete", 00:09:09.922 "bdev_split_create", 00:09:09.922 "bdev_delay_delete", 00:09:09.922 "bdev_delay_create", 00:09:09.922 "bdev_delay_update_latency", 00:09:09.922 "bdev_zone_block_delete", 00:09:09.922 "bdev_zone_block_create", 00:09:09.922 "blobfs_create", 00:09:09.922 "blobfs_detect", 00:09:09.922 "blobfs_set_cache_size", 00:09:09.922 "bdev_aio_delete", 00:09:09.922 "bdev_aio_rescan", 00:09:09.922 "bdev_aio_create", 00:09:09.922 "bdev_ftl_set_property", 00:09:09.922 "bdev_ftl_get_properties", 00:09:09.922 "bdev_ftl_get_stats", 00:09:09.922 "bdev_ftl_unmap", 00:09:09.922 "bdev_ftl_unload", 00:09:09.922 "bdev_ftl_delete", 00:09:09.922 "bdev_ftl_load", 00:09:09.922 "bdev_ftl_create", 00:09:09.922 "bdev_virtio_attach_controller", 00:09:09.922 "bdev_virtio_scsi_get_devices", 00:09:09.922 "bdev_virtio_detach_controller", 00:09:09.922 "bdev_virtio_blk_set_hotplug", 00:09:09.922 "bdev_iscsi_delete", 00:09:09.922 "bdev_iscsi_create", 00:09:09.922 "bdev_iscsi_set_options", 00:09:09.922 "accel_error_inject_error", 00:09:09.922 "ioat_scan_accel_module", 00:09:09.922 "dsa_scan_accel_module", 00:09:09.922 "iaa_scan_accel_module", 00:09:09.922 "vfu_virtio_create_scsi_endpoint", 00:09:09.922 "vfu_virtio_scsi_remove_target", 00:09:09.922 "vfu_virtio_scsi_add_target", 00:09:09.922 "vfu_virtio_create_blk_endpoint", 00:09:09.922 "vfu_virtio_delete_endpoint", 00:09:09.922 "keyring_file_remove_key", 00:09:09.922 "keyring_file_add_key", 00:09:09.922 "iscsi_get_histogram", 00:09:09.922 "iscsi_enable_histogram", 00:09:09.922 "iscsi_set_options", 00:09:09.922 "iscsi_get_auth_groups", 00:09:09.922 "iscsi_auth_group_remove_secret", 00:09:09.922 "iscsi_auth_group_add_secret", 00:09:09.922 "iscsi_delete_auth_group", 00:09:09.922 "iscsi_create_auth_group", 00:09:09.922 "iscsi_set_discovery_auth", 00:09:09.922 "iscsi_get_options", 00:09:09.922 "iscsi_target_node_request_logout", 00:09:09.922 "iscsi_target_node_set_redirect", 00:09:09.922 "iscsi_target_node_set_auth", 00:09:09.922 "iscsi_target_node_add_lun", 00:09:09.922 "iscsi_get_stats", 00:09:09.922 "iscsi_get_connections", 00:09:09.922 "iscsi_portal_group_set_auth", 00:09:09.922 "iscsi_start_portal_group", 00:09:09.922 "iscsi_delete_portal_group", 00:09:09.922 "iscsi_create_portal_group", 00:09:09.922 "iscsi_get_portal_groups", 00:09:09.922 "iscsi_delete_target_node", 00:09:09.922 "iscsi_target_node_remove_pg_ig_maps", 00:09:09.922 "iscsi_target_node_add_pg_ig_maps", 00:09:09.922 "iscsi_create_target_node", 00:09:09.922 "iscsi_get_target_nodes", 00:09:09.922 "iscsi_delete_initiator_group", 00:09:09.922 "iscsi_initiator_group_remove_initiators", 00:09:09.922 "iscsi_initiator_group_add_initiators", 00:09:09.922 "iscsi_create_initiator_group", 00:09:09.922 "iscsi_get_initiator_groups", 00:09:09.922 "nvmf_set_crdt", 00:09:09.922 "nvmf_set_config", 00:09:09.922 "nvmf_set_max_subsystems", 00:09:09.922 "nvmf_subsystem_get_listeners", 00:09:09.922 "nvmf_subsystem_get_qpairs", 00:09:09.922 "nvmf_subsystem_get_controllers", 00:09:09.922 "nvmf_get_stats", 00:09:09.922 "nvmf_get_transports", 00:09:09.922 "nvmf_create_transport", 00:09:09.922 "nvmf_get_targets", 00:09:09.922 "nvmf_delete_target", 00:09:09.922 "nvmf_create_target", 00:09:09.922 
"nvmf_subsystem_allow_any_host", 00:09:09.922 "nvmf_subsystem_remove_host", 00:09:09.922 "nvmf_subsystem_add_host", 00:09:09.922 "nvmf_ns_remove_host", 00:09:09.922 "nvmf_ns_add_host", 00:09:09.922 "nvmf_subsystem_remove_ns", 00:09:09.922 "nvmf_subsystem_add_ns", 00:09:09.922 "nvmf_subsystem_listener_set_ana_state", 00:09:09.922 "nvmf_discovery_get_referrals", 00:09:09.922 "nvmf_discovery_remove_referral", 00:09:09.922 "nvmf_discovery_add_referral", 00:09:09.922 "nvmf_subsystem_remove_listener", 00:09:09.922 "nvmf_subsystem_add_listener", 00:09:09.922 "nvmf_delete_subsystem", 00:09:09.922 "nvmf_create_subsystem", 00:09:09.922 "nvmf_get_subsystems", 00:09:09.922 "env_dpdk_get_mem_stats", 00:09:09.922 "nbd_get_disks", 00:09:09.922 "nbd_stop_disk", 00:09:09.922 "nbd_start_disk", 00:09:09.922 "ublk_recover_disk", 00:09:09.922 "ublk_get_disks", 00:09:09.922 "ublk_stop_disk", 00:09:09.922 "ublk_start_disk", 00:09:09.922 "ublk_destroy_target", 00:09:09.922 "ublk_create_target", 00:09:09.922 "virtio_blk_create_transport", 00:09:09.922 "virtio_blk_get_transports", 00:09:09.922 "vhost_controller_set_coalescing", 00:09:09.922 "vhost_get_controllers", 00:09:09.922 "vhost_delete_controller", 00:09:09.922 "vhost_create_blk_controller", 00:09:09.922 "vhost_scsi_controller_remove_target", 00:09:09.922 "vhost_scsi_controller_add_target", 00:09:09.922 "vhost_start_scsi_controller", 00:09:09.922 "vhost_create_scsi_controller", 00:09:09.922 "thread_set_cpumask", 00:09:09.922 "framework_get_scheduler", 00:09:09.922 "framework_set_scheduler", 00:09:09.922 "framework_get_reactors", 00:09:09.922 "thread_get_io_channels", 00:09:09.922 "thread_get_pollers", 00:09:09.922 "thread_get_stats", 00:09:09.922 "framework_monitor_context_switch", 00:09:09.922 "spdk_kill_instance", 00:09:09.922 "log_enable_timestamps", 00:09:09.922 "log_get_flags", 00:09:09.922 "log_clear_flag", 00:09:09.922 "log_set_flag", 00:09:09.922 "log_get_level", 00:09:09.922 "log_set_level", 00:09:09.922 "log_get_print_level", 00:09:09.922 "log_set_print_level", 00:09:09.922 "framework_enable_cpumask_locks", 00:09:09.922 "framework_disable_cpumask_locks", 00:09:09.922 "framework_wait_init", 00:09:09.922 "framework_start_init", 00:09:09.922 "scsi_get_devices", 00:09:09.922 "bdev_get_histogram", 00:09:09.922 "bdev_enable_histogram", 00:09:09.922 "bdev_set_qos_limit", 00:09:09.922 "bdev_set_qd_sampling_period", 00:09:09.922 "bdev_get_bdevs", 00:09:09.922 "bdev_reset_iostat", 00:09:09.922 "bdev_get_iostat", 00:09:09.922 "bdev_examine", 00:09:09.922 "bdev_wait_for_examine", 00:09:09.922 "bdev_set_options", 00:09:09.922 "notify_get_notifications", 00:09:09.922 "notify_get_types", 00:09:09.922 "accel_get_stats", 00:09:09.922 "accel_set_options", 00:09:09.922 "accel_set_driver", 00:09:09.922 "accel_crypto_key_destroy", 00:09:09.922 "accel_crypto_keys_get", 00:09:09.922 "accel_crypto_key_create", 00:09:09.922 "accel_assign_opc", 00:09:09.922 "accel_get_module_info", 00:09:09.922 "accel_get_opc_assignments", 00:09:09.922 "vmd_rescan", 00:09:09.922 "vmd_remove_device", 00:09:09.922 "vmd_enable", 00:09:09.922 "sock_get_default_impl", 00:09:09.922 "sock_set_default_impl", 00:09:09.922 "sock_impl_set_options", 00:09:09.922 "sock_impl_get_options", 00:09:09.922 "iobuf_get_stats", 00:09:09.922 "iobuf_set_options", 00:09:09.922 "keyring_get_keys", 00:09:09.922 "framework_get_pci_devices", 00:09:09.922 "framework_get_config", 00:09:09.922 "framework_get_subsystems", 00:09:09.922 "vfu_tgt_set_base_path", 00:09:09.922 "trace_get_info", 00:09:09.922 
"trace_get_tpoint_group_mask", 00:09:09.922 "trace_disable_tpoint_group", 00:09:09.922 "trace_enable_tpoint_group", 00:09:09.922 "trace_clear_tpoint_mask", 00:09:09.922 "trace_set_tpoint_mask", 00:09:09.922 "spdk_get_version", 00:09:09.922 "rpc_get_methods" 00:09:09.922 ] 00:09:09.922 08:20:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.922 08:20:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:09.922 08:20:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 141250 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 141250 ']' 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 141250 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 141250 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 141250' 00:09:09.922 killing process with pid 141250 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 141250 00:09:09.922 08:20:56 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 141250 00:09:10.182 00:09:10.182 real 0m1.546s 00:09:10.182 user 0m2.874s 00:09:10.182 sys 0m0.431s 00:09:10.182 08:20:57 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.182 08:20:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.182 ************************************ 00:09:10.182 END TEST spdkcli_tcp 00:09:10.182 ************************************ 00:09:10.182 08:20:57 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:10.182 08:20:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:10.182 08:20:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.182 08:20:57 -- common/autotest_common.sh@10 -- # set +x 00:09:10.441 ************************************ 00:09:10.441 START TEST dpdk_mem_utility 00:09:10.441 ************************************ 00:09:10.441 08:20:57 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:10.441 * Looking for test storage... 
00:09:10.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:10.441 08:20:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:10.441 08:20:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=141551 00:09:10.441 08:20:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 141551 00:09:10.441 08:20:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:10.441 08:20:57 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 141551 ']' 00:09:10.441 08:20:57 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.441 08:20:57 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:10.441 08:20:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.441 08:20:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:10.441 08:20:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:10.441 [2024-05-15 08:20:57.360115] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:09:10.441 [2024-05-15 08:20:57.360160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141551 ] 00:09:10.441 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.441 [2024-05-15 08:20:57.425753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.701 [2024-05-15 08:20:57.504781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.272 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:11.272 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:09:11.272 08:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:11.272 08:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:11.272 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.272 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:11.272 { 00:09:11.272 "filename": "/tmp/spdk_mem_dump.txt" 00:09:11.272 } 00:09:11.272 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.272 08:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:11.272 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:11.272 1 heaps totaling size 814.000000 MiB 00:09:11.272 size: 814.000000 MiB heap id: 0 00:09:11.272 end heaps---------- 00:09:11.272 8 mempools totaling size 598.116089 MiB 00:09:11.272 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:11.272 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:11.272 size: 84.521057 MiB name: bdev_io_141551 00:09:11.272 size: 51.011292 MiB name: evtpool_141551 00:09:11.272 size: 50.003479 MiB name: 
msgpool_141551 00:09:11.272 size: 21.763794 MiB name: PDU_Pool 00:09:11.272 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:11.272 size: 0.026123 MiB name: Session_Pool 00:09:11.272 end mempools------- 00:09:11.272 6 memzones totaling size 4.142822 MiB 00:09:11.272 size: 1.000366 MiB name: RG_ring_0_141551 00:09:11.272 size: 1.000366 MiB name: RG_ring_1_141551 00:09:11.272 size: 1.000366 MiB name: RG_ring_4_141551 00:09:11.272 size: 1.000366 MiB name: RG_ring_5_141551 00:09:11.272 size: 0.125366 MiB name: RG_ring_2_141551 00:09:11.272 size: 0.015991 MiB name: RG_ring_3_141551 00:09:11.272 end memzones------- 00:09:11.272 08:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:11.272 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:09:11.272 list of free elements. size: 12.519348 MiB 00:09:11.272 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:11.272 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:11.272 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:11.272 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:11.272 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:11.272 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:11.272 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:11.272 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:11.272 element at address: 0x200000200000 with size: 0.841614 MiB 00:09:11.272 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:09:11.272 element at address: 0x20000b200000 with size: 0.490723 MiB 00:09:11.272 element at address: 0x200000800000 with size: 0.487793 MiB 00:09:11.272 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:11.272 element at address: 0x200027e00000 with size: 0.410034 MiB 00:09:11.272 element at address: 0x200003a00000 with size: 0.355530 MiB 00:09:11.272 list of standard malloc elements. 
size: 199.218079 MiB 00:09:11.272 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:11.272 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:11.272 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:11.272 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:11.272 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:11.272 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:11.272 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:11.272 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:11.272 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:11.272 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:09:11.272 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:09:11.272 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:09:11.272 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:11.273 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:11.273 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:11.273 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:11.273 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:11.273 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:11.273 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:11.273 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:11.273 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:11.273 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:11.273 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:11.273 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:11.273 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:11.273 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:11.273 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:11.273 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:11.273 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200027e69040 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:11.273 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:11.273 list of memzone associated elements. 
size: 602.262573 MiB 00:09:11.273 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:11.273 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:11.273 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:11.273 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:11.273 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:11.273 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_141551_0 00:09:11.273 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:11.273 associated memzone info: size: 48.002930 MiB name: MP_evtpool_141551_0 00:09:11.273 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:11.273 associated memzone info: size: 48.002930 MiB name: MP_msgpool_141551_0 00:09:11.273 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:11.273 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:11.273 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:11.273 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:11.273 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:11.273 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_141551 00:09:11.273 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:11.273 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_141551 00:09:11.273 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:11.273 associated memzone info: size: 1.007996 MiB name: MP_evtpool_141551 00:09:11.273 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:11.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:11.273 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:11.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:11.273 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:11.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:11.273 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:11.273 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:11.273 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:11.273 associated memzone info: size: 1.000366 MiB name: RG_ring_0_141551 00:09:11.273 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:11.273 associated memzone info: size: 1.000366 MiB name: RG_ring_1_141551 00:09:11.273 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:11.273 associated memzone info: size: 1.000366 MiB name: RG_ring_4_141551 00:09:11.273 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:11.273 associated memzone info: size: 1.000366 MiB name: RG_ring_5_141551 00:09:11.273 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:11.273 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_141551 00:09:11.273 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:11.273 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:11.273 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:11.273 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:11.273 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:11.273 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:11.273 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:11.273 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_141551 00:09:11.273 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:11.273 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:11.273 element at address: 0x200027e69100 with size: 0.023743 MiB 00:09:11.273 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:11.273 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:11.273 associated memzone info: size: 0.015991 MiB name: RG_ring_3_141551 00:09:11.273 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:09:11.273 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:11.273 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:09:11.273 associated memzone info: size: 0.000183 MiB name: MP_msgpool_141551 00:09:11.273 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:11.273 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_141551 00:09:11.273 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:09:11.273 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:11.273 08:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:11.273 08:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 141551 00:09:11.273 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 141551 ']' 00:09:11.273 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 141551 00:09:11.273 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:09:11.273 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:11.273 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 141551 00:09:11.533 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:11.533 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:11.533 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 141551' 00:09:11.533 killing process with pid 141551 00:09:11.533 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 141551 00:09:11.533 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 141551 00:09:11.793 00:09:11.793 real 0m1.417s 00:09:11.793 user 0m1.484s 00:09:11.793 sys 0m0.402s 00:09:11.793 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:11.793 08:20:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:11.793 ************************************ 00:09:11.793 END TEST dpdk_mem_utility 00:09:11.793 ************************************ 00:09:11.793 08:20:58 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:11.793 08:20:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:11.793 08:20:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:11.793 08:20:58 -- common/autotest_common.sh@10 -- # set +x 00:09:11.793 ************************************ 00:09:11.793 START TEST event 00:09:11.793 ************************************ 00:09:11.793 08:20:58 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:11.793 * Looking for test storage... 
00:09:11.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:11.793 08:20:58 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:11.793 08:20:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:11.793 08:20:58 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:11.793 08:20:58 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:11.793 08:20:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:11.793 08:20:58 event -- common/autotest_common.sh@10 -- # set +x 00:09:12.052 ************************************ 00:09:12.052 START TEST event_perf 00:09:12.052 ************************************ 00:09:12.052 08:20:58 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:12.052 Running I/O for 1 seconds...[2024-05-15 08:20:58.854817] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:09:12.052 [2024-05-15 08:20:58.854876] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141841 ] 00:09:12.052 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.052 [2024-05-15 08:20:58.925331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.053 [2024-05-15 08:20:58.999765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.053 [2024-05-15 08:20:58.999874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.053 [2024-05-15 08:20:58.999981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.053 Running I/O for 1 seconds...[2024-05-15 08:20:58.999982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.431 00:09:13.431 lcore 0: 195649 00:09:13.431 lcore 1: 195650 00:09:13.431 lcore 2: 195649 00:09:13.431 lcore 3: 195650 00:09:13.431 done. 00:09:13.431 00:09:13.431 real 0m1.261s 00:09:13.431 user 0m4.170s 00:09:13.431 sys 0m0.087s 00:09:13.431 08:21:00 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:13.431 08:21:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:13.431 ************************************ 00:09:13.431 END TEST event_perf 00:09:13.431 ************************************ 00:09:13.432 08:21:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:13.432 08:21:00 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:13.432 08:21:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:13.432 08:21:00 event -- common/autotest_common.sh@10 -- # set +x 00:09:13.432 ************************************ 00:09:13.432 START TEST event_reactor 00:09:13.432 ************************************ 00:09:13.432 08:21:00 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:13.432 [2024-05-15 08:21:00.198056] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
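Every test in this log is wrapped by the run_test helper, which prints the asterisk START/END banners and the real/user/sys lines via bash's time builtin. The sketch below is an assumed simplification of its shape; the real helper in common/autotest_common.sh also validates its arguments and manages xtrace, which appears to be where the "'[' 2 -le 1 ']'" and xtrace_disable lines in this log come from.

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # emits the real/user/sys summary seen after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }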
00:09:13.432 [2024-05-15 08:21:00.198127] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142116 ] 00:09:13.432 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.432 [2024-05-15 08:21:00.268387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.432 [2024-05-15 08:21:00.341681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.816 test_start 00:09:14.816 oneshot 00:09:14.816 tick 100 00:09:14.816 tick 100 00:09:14.817 tick 250 00:09:14.817 tick 100 00:09:14.817 tick 100 00:09:14.817 tick 100 00:09:14.817 tick 250 00:09:14.817 tick 500 00:09:14.817 tick 100 00:09:14.817 tick 100 00:09:14.817 tick 250 00:09:14.817 tick 100 00:09:14.817 tick 100 00:09:14.817 test_end 00:09:14.817 00:09:14.817 real 0m1.256s 00:09:14.817 user 0m1.165s 00:09:14.817 sys 0m0.085s 00:09:14.817 08:21:01 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:14.817 08:21:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:14.817 ************************************ 00:09:14.817 END TEST event_reactor 00:09:14.817 ************************************ 00:09:14.817 08:21:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:14.817 08:21:01 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:14.817 08:21:01 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:14.817 08:21:01 event -- common/autotest_common.sh@10 -- # set +x 00:09:14.817 ************************************ 00:09:14.817 START TEST event_reactor_perf 00:09:14.817 ************************************ 00:09:14.817 08:21:01 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:14.817 [2024-05-15 08:21:01.528941] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
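The recurring "EAL: No free 2048 kB hugepages reported on node 1" notices above are informational rather than fatal: NUMA node 1 has no free 2 MiB hugepages, and since every test still starts, the allocations are evidently satisfied from the other node. Per-node hugepage counts can be inspected through the standard kernel interfaces:

    # free/total 2 MiB hugepages per NUMA node
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep -i huge /proc/meminfo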
00:09:14.817 [2024-05-15 08:21:01.529006] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142455 ] 00:09:14.817 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.817 [2024-05-15 08:21:01.601646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.817 [2024-05-15 08:21:01.674346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.755 test_start 00:09:15.755 test_end 00:09:15.755 Performance: 491026 events per second 00:09:15.755 00:09:15.755 real 0m1.260s 00:09:15.755 user 0m1.162s 00:09:15.755 sys 0m0.093s 00:09:15.755 08:21:02 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.755 08:21:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:15.755 ************************************ 00:09:15.755 END TEST event_reactor_perf 00:09:15.755 ************************************ 00:09:16.016 08:21:02 event -- event/event.sh@49 -- # uname -s 00:09:16.016 08:21:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:16.016 08:21:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:16.016 08:21:02 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:16.016 08:21:02 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:16.016 08:21:02 event -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 ************************************ 00:09:16.016 START TEST event_scheduler 00:09:16.016 ************************************ 00:09:16.016 08:21:02 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:16.016 * Looking for test storage... 00:09:16.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:16.016 08:21:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:16.016 08:21:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=142754 00:09:16.016 08:21:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:16.016 08:21:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:16.016 08:21:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 142754 00:09:16.016 08:21:02 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 142754 ']' 00:09:16.016 08:21:02 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.016 08:21:02 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:16.016 08:21:02 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
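The scheduler app above was launched with --wait-for-rpc, so it idles until the test picks a scheduler and completes initialization over RPC; those are the framework_set_scheduler and framework_start_init calls that follow. Issued by hand the sequence would look like the sketch below (rpc_cmd in this log is a thin wrapper that ultimately talks to the same socket):

    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init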
00:09:16.016 08:21:02 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:16.016 08:21:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 [2024-05-15 08:21:02.970933] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:09:16.016 [2024-05-15 08:21:02.970977] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142754 ] 00:09:16.016 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.275 [2024-05-15 08:21:03.039620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.275 [2024-05-15 08:21:03.122134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.275 [2024-05-15 08:21:03.122236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.275 [2024-05-15 08:21:03.122268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.275 [2024-05-15 08:21:03.122268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.843 08:21:03 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:16.843 08:21:03 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:09:16.843 08:21:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:16.843 08:21:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.843 08:21:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:16.843 POWER: Env isn't set yet! 00:09:16.843 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:16.843 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:16.843 POWER: Cannot set governor of lcore 0 to userspace 00:09:16.843 POWER: Attempting to initialise PSTAT power management... 00:09:16.843 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:09:16.843 POWER: Initialized successfully for lcore 0 power management 00:09:16.843 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:09:16.843 POWER: Initialized successfully for lcore 1 power management 00:09:16.843 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:09:16.843 POWER: Initialized successfully for lcore 2 power management 00:09:16.843 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:09:16.843 POWER: Initialized successfully for lcore 3 power management 00:09:16.843 08:21:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.843 08:21:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:16.843 08:21:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.843 08:21:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:17.103 [2024-05-15 08:21:03.894296] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
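The scheduler_create_thread subtest that follows exercises the test-only scheduler_plugin RPCs: create pinned or idle threads with a cpumask (-m) and an active percentage (-a), change a thread's activity, and delete it. The calls have this shape (thread ids such as 11 and 12 are returned by the create call, as the log shows; the plugin must be importable by rpc.py, which the test presumably arranges via PYTHONPATH):

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12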
00:09:17.103 08:21:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.103 08:21:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:17.103 08:21:03 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:17.103 08:21:03 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:17.103 08:21:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:17.103 ************************************ 00:09:17.103 START TEST scheduler_create_thread 00:09:17.103 ************************************ 00:09:17.103 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:09:17.103 08:21:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:17.103 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.103 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.103 2 00:09:17.103 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.104 3 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.104 4 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.104 5 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.104 6 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.104 08:21:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.104 7 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.104 8 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.104 9 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.104 10 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.104 08:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.483 08:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.483 08:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:18.483 08:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:18.483 08:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.483 08:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:19.419 08:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.419 08:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:19.419 08:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.419 08:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.354 08:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.354 08:21:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:20.354 08:21:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:20.354 08:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.354 08:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.922 08:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.922 00:09:20.922 real 0m3.892s 00:09:20.922 user 0m0.024s 00:09:20.922 sys 0m0.005s 00:09:20.922 08:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:20.922 08:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.922 ************************************ 00:09:20.922 END TEST scheduler_create_thread 00:09:20.922 ************************************ 00:09:20.922 08:21:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:20.922 08:21:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 142754 00:09:20.922 08:21:07 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 142754 ']' 00:09:20.922 08:21:07 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 142754 00:09:20.922 08:21:07 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:09:20.922 08:21:07 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:20.922 08:21:07 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142754 00:09:20.922 08:21:07 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:09:20.922 08:21:07 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:09:20.922 08:21:07 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142754' 00:09:20.922 killing process with pid 142754 00:09:20.922 08:21:07 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 142754 00:09:20.922 08:21:07 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 142754 00:09:21.491 [2024-05-15 08:21:08.205955] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
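Process teardown throughout this log follows one pattern, just applied to pid 142754 above: send the target a signal, then poll with kill -0 until it exits, sleeping between attempts (the json_config shutdown earlier in this log bounds the loop at 30 iterations of 0.5 s). The core of it, as a sketch:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # still alive?
        sleep 0.5
    done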
00:09:21.491 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:09:21.491 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:09:21.491 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:09:21.491 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:09:21.491 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:09:21.491 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:09:21.491 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:09:21.491 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:09:21.491 00:09:21.491 real 0m5.663s 00:09:21.491 user 0m12.207s 00:09:21.491 sys 0m0.367s 00:09:21.491 08:21:08 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:21.491 08:21:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:21.491 ************************************ 00:09:21.491 END TEST event_scheduler 00:09:21.491 ************************************ 00:09:21.750 08:21:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:21.750 08:21:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:21.750 08:21:08 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:21.750 08:21:08 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.750 08:21:08 event -- common/autotest_common.sh@10 -- # set +x 00:09:21.750 ************************************ 00:09:21.750 START TEST app_repeat 00:09:21.750 ************************************ 00:09:21.750 08:21:08 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=144117 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 144117' 00:09:21.750 Process app_repeat pid: 144117 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:21.750 spdk_app_start Round 0 00:09:21.750 08:21:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 144117 /var/tmp/spdk-nbd.sock 00:09:21.751 08:21:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:21.751 08:21:08 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 144117 ']' 00:09:21.751 08:21:08 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:21.751 08:21:08 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:21.751 08:21:08 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:21.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:21.751 08:21:08 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:21.751 08:21:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:21.751 [2024-05-15 08:21:08.612143] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:09:21.751 [2024-05-15 08:21:08.612220] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144117 ] 00:09:21.751 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.751 [2024-05-15 08:21:08.667857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.751 [2024-05-15 08:21:08.746972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.751 [2024-05-15 08:21:08.746976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.688 08:21:09 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:22.688 08:21:09 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:09:22.688 08:21:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:22.688 Malloc0 00:09:22.688 08:21:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:22.946 Malloc1 00:09:22.946 08:21:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:22.946 08:21:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.946 08:21:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:22.946 08:21:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:22.946 08:21:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.947 08:21:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:22.947 /dev/nbd0 00:09:23.206 08:21:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:23.206 08:21:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:23.206 08:21:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:23.206 08:21:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:09:23.206 08:21:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:23.206 08:21:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:23.206 08:21:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:23.206 08:21:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:09:23.206 08:21:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:23.206 08:21:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:23.206 08:21:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:23.206 1+0 records in 00:09:23.206 1+0 records out 00:09:23.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187207 s, 21.9 MB/s 00:09:23.206 08:21:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:09:23.206 08:21:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:23.206 08:21:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:23.206 08:21:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:23.206 /dev/nbd1 00:09:23.206 08:21:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:23.206 08:21:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:23.206 1+0 records in 00:09:23.206 1+0 records out 00:09:23.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000139349 s, 29.4 MB/s 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:23.206 08:21:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:09:23.206 08:21:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:23.206 08:21:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:23.206 08:21:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:23.206 08:21:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.206 08:21:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:23.465 { 00:09:23.465 "nbd_device": "/dev/nbd0", 00:09:23.465 "bdev_name": "Malloc0" 00:09:23.465 }, 00:09:23.465 { 00:09:23.465 "nbd_device": "/dev/nbd1", 00:09:23.465 "bdev_name": "Malloc1" 00:09:23.465 } 00:09:23.465 ]' 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:23.465 { 00:09:23.465 "nbd_device": "/dev/nbd0", 00:09:23.465 "bdev_name": "Malloc0" 00:09:23.465 }, 00:09:23.465 { 00:09:23.465 "nbd_device": "/dev/nbd1", 00:09:23.465 "bdev_name": "Malloc1" 00:09:23.465 } 00:09:23.465 ]' 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:23.465 /dev/nbd1' 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:23.465 /dev/nbd1' 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:23.465 256+0 records in 00:09:23.465 256+0 records out 00:09:23.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00963172 s, 109 MB/s 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:23.465 256+0 records in 00:09:23.465 256+0 records out 00:09:23.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133949 s, 78.3 MB/s 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:23.465 256+0 records in 00:09:23.465 256+0 records out 00:09:23.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148712 s, 70.5 MB/s 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:23.465 08:21:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.724 08:21:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.983 08:21:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:24.241 08:21:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:24.241 08:21:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:24.241 08:21:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:24.241 08:21:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:24.242 08:21:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:24.242 08:21:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:24.242 08:21:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:24.242 08:21:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:24.242 08:21:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:24.242 08:21:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:24.242 08:21:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:24.242 08:21:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:24.242 08:21:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:24.501 08:21:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:24.759 [2024-05-15 08:21:11.525703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:24.759 [2024-05-15 08:21:11.591974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.759 [2024-05-15 08:21:11.591978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.759 [2024-05-15 08:21:11.633678] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:24.759 [2024-05-15 08:21:11.633715] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
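The Round 0 block above ends with nbd_common.sh's data-verify pass: write 1 MiB of random data through each NBD device, then byte-compare it back. Stripped of xtrace noise, the cycle is roughly the following sketch (the scratch-file path is an assumption; the test keeps it inside its own workspace):

  tmp=/tmp/nbdrandtest                                    # assumed scratch path
  dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write through the NBD device, bypassing page cache
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"                            # non-zero exit (and test failure) on any mismatch
  done
  rm "$tmp"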
00:09:28.047 08:21:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:28.047 08:21:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:28.047 spdk_app_start Round 1 00:09:28.047 08:21:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 144117 /var/tmp/spdk-nbd.sock 00:09:28.047 08:21:14 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 144117 ']' 00:09:28.047 08:21:14 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:28.047 08:21:14 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:28.047 08:21:14 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:28.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:28.047 08:21:14 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:28.047 08:21:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:28.047 08:21:14 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:28.047 08:21:14 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:09:28.047 08:21:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:28.047 Malloc0 00:09:28.047 08:21:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:28.047 Malloc1 00:09:28.047 08:21:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.047 08:21:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:28.047 /dev/nbd0 00:09:28.047 08:21:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:28.047 08:21:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:09:28.047 08:21:15 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:28.047 08:21:15 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:09:28.047 08:21:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:28.047 08:21:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:28.047 08:21:15 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:28.047 08:21:15 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:09:28.047 08:21:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:28.047 08:21:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:28.048 08:21:15 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:28.048 1+0 records in 00:09:28.048 1+0 records out 00:09:28.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000101387 s, 40.4 MB/s 00:09:28.048 08:21:15 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:28.048 08:21:15 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:09:28.048 08:21:15 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:28.048 08:21:15 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:28.048 08:21:15 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:09:28.048 08:21:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.048 08:21:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.048 08:21:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:28.307 /dev/nbd1 00:09:28.307 08:21:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:28.307 08:21:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:28.307 1+0 records in 00:09:28.307 1+0 records out 00:09:28.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199492 s, 20.5 MB/s 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:09:28.307 08:21:15 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:28.307 08:21:15 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:09:28.307 08:21:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.307 08:21:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.307 08:21:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:28.307 08:21:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.307 08:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:28.568 { 00:09:28.568 "nbd_device": "/dev/nbd0", 00:09:28.568 "bdev_name": "Malloc0" 00:09:28.568 }, 00:09:28.568 { 00:09:28.568 "nbd_device": "/dev/nbd1", 00:09:28.568 "bdev_name": "Malloc1" 00:09:28.568 } 00:09:28.568 ]' 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:28.568 { 00:09:28.568 "nbd_device": "/dev/nbd0", 00:09:28.568 "bdev_name": "Malloc0" 00:09:28.568 }, 00:09:28.568 { 00:09:28.568 "nbd_device": "/dev/nbd1", 00:09:28.568 "bdev_name": "Malloc1" 00:09:28.568 } 00:09:28.568 ]' 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:28.568 /dev/nbd1' 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:28.568 /dev/nbd1' 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:28.568 256+0 records in 00:09:28.568 256+0 records out 00:09:28.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103118 s, 102 MB/s 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:28.568 256+0 records in 00:09:28.568 256+0 records out 00:09:28.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0147263 s, 71.2 MB/s 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:28.568 256+0 records in 00:09:28.568 256+0 records out 00:09:28.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147488 s, 71.1 MB/s 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:28.568 08:21:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:28.569 08:21:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.569 08:21:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.569 08:21:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:28.569 08:21:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:28.569 08:21:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.569 08:21:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:28.828 08:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:28.828 08:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:28.828 08:21:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:28.828 08:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.828 08:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.828 08:21:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:28.828 08:21:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:28.828 08:21:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.828 08:21:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.828 08:21:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.087 08:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.087 08:21:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:29.087 08:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:29.087 08:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.347 08:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:29.347 08:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:29.347 08:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.347 08:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:29.347 08:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:29.347 08:21:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:29.347 08:21:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:29.347 08:21:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:29.347 08:21:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:29.347 08:21:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:29.347 08:21:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:29.606 [2024-05-15 08:21:16.542062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.606 [2024-05-15 08:21:16.607713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.606 [2024-05-15 08:21:16.607716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.865 [2024-05-15 08:21:16.650160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:29.865 [2024-05-15 08:21:16.650200] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
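Each round repeats the same bdev/NBD plumbing against the app's RPC socket. Condensed into plain rpc.py calls (the socket path and RPC method names are exactly as logged; running from an SPDK checkout is assumed):

  RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096        # 64 MiB malloc bdev with 4096-byte blocks; prints e.g. Malloc0
  $RPC nbd_start_disk Malloc0 /dev/nbd0  # export the bdev as a kernel NBD device
  $RPC nbd_get_disks                     # JSON array of {nbd_device, bdev_name} pairs
  $RPC nbd_stop_disk /dev/nbd0           # detach before the next round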
00:09:32.402 08:21:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:32.402 08:21:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:32.402 spdk_app_start Round 2 00:09:32.402 08:21:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 144117 /var/tmp/spdk-nbd.sock 00:09:32.402 08:21:19 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 144117 ']' 00:09:32.402 08:21:19 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:32.402 08:21:19 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:32.402 08:21:19 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:32.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:32.402 08:21:19 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:32.402 08:21:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:32.662 08:21:19 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:32.662 08:21:19 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:09:32.662 08:21:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.662 Malloc0 00:09:32.921 08:21:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.921 Malloc1 00:09:32.921 08:21:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.921 08:21:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.921 08:21:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.921 08:21:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:32.921 08:21:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.921 08:21:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:32.921 08:21:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.922 08:21:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.922 08:21:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.922 08:21:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:32.922 08:21:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.922 08:21:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:32.922 08:21:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:32.922 08:21:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:32.922 08:21:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.922 08:21:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:33.181 /dev/nbd0 00:09:33.181 08:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:33.181 08:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:09:33.181 08:21:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:33.181 08:21:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:09:33.181 08:21:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:33.181 08:21:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:33.181 08:21:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:33.181 08:21:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:09:33.181 08:21:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:33.181 08:21:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:33.182 08:21:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:33.182 1+0 records in 00:09:33.182 1+0 records out 00:09:33.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019636 s, 20.9 MB/s 00:09:33.182 08:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:33.182 08:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:09:33.182 08:21:20 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:33.182 08:21:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:33.182 08:21:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:09:33.182 08:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.182 08:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.182 08:21:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:33.441 /dev/nbd1 00:09:33.441 08:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:33.441 08:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:33.441 1+0 records in 00:09:33.441 1+0 records out 00:09:33.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185333 s, 22.1 MB/s 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:33.441 08:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:09:33.442 08:21:20 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:33.442 08:21:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:33.442 08:21:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:09:33.442 08:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.442 08:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.442 08:21:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.442 08:21:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.442 08:21:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.442 08:21:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:33.442 { 00:09:33.442 "nbd_device": "/dev/nbd0", 00:09:33.442 "bdev_name": "Malloc0" 00:09:33.442 }, 00:09:33.442 { 00:09:33.442 "nbd_device": "/dev/nbd1", 00:09:33.442 "bdev_name": "Malloc1" 00:09:33.442 } 00:09:33.442 ]' 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:33.701 { 00:09:33.701 "nbd_device": "/dev/nbd0", 00:09:33.701 "bdev_name": "Malloc0" 00:09:33.701 }, 00:09:33.701 { 00:09:33.701 "nbd_device": "/dev/nbd1", 00:09:33.701 "bdev_name": "Malloc1" 00:09:33.701 } 00:09:33.701 ]' 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:33.701 /dev/nbd1' 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:33.701 /dev/nbd1' 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:33.701 08:21:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:33.702 256+0 records in 00:09:33.702 256+0 records out 00:09:33.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102029 s, 103 MB/s 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:33.702 256+0 records in 00:09:33.702 256+0 records out 00:09:33.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0135038 s, 77.7 MB/s 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:33.702 256+0 records in 00:09:33.702 256+0 records out 00:09:33.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147914 s, 70.9 MB/s 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.702 08:21:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.961 08:21:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.962 08:21:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.962 08:21:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:34.221 08:21:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:34.221 08:21:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:34.480 08:21:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:34.740 [2024-05-15 08:21:21.602072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.740 [2024-05-15 08:21:21.667853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.740 [2024-05-15 08:21:21.667856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.740 [2024-05-15 08:21:21.709573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:34.740 [2024-05-15 08:21:21.709613] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
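Round 2 tears down the same way, after which the harness reaps the app with its killprocess helper. Below is a hedged reconstruction of that helper, matching the autotest_common.sh checks visible in the trace (the 2>/dev/null redirects are my addition):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 1            # is the pid still alive?
    if [ "$(uname)" = Linux ]; then
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" = sudo ] && return 1                  # pid-reuse guard: never signal a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                           # only reaps our own children
  }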
00:09:38.031 08:21:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 144117 /var/tmp/spdk-nbd.sock 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 144117 ']' 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:38.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:09:38.031 08:21:24 event.app_repeat -- event/event.sh@39 -- # killprocess 144117 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 144117 ']' 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 144117 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144117 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144117' 00:09:38.031 killing process with pid 144117 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@965 -- # kill 144117 00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@970 -- # wait 144117 00:09:38.031 spdk_app_start is called in Round 0. 00:09:38.031 Shutdown signal received, stop current app iteration 00:09:38.031 Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 reinitialization... 00:09:38.031 spdk_app_start is called in Round 1. 00:09:38.031 Shutdown signal received, stop current app iteration 00:09:38.031 Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 reinitialization... 00:09:38.031 spdk_app_start is called in Round 2. 00:09:38.031 Shutdown signal received, stop current app iteration 00:09:38.031 Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 reinitialization... 00:09:38.031 spdk_app_start is called in Round 3. 
00:09:38.031 Shutdown signal received, stop current app iteration
00:09:38.031 08:21:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:09:38.031 08:21:24 event.app_repeat -- event/event.sh@42 -- # return 0
00:09:38.031
00:09:38.031 real 0m16.219s
00:09:38.031 user 0m35.066s
00:09:38.031 sys 0m2.307s
00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:38.031 08:21:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:38.031 ************************************
00:09:38.031 END TEST app_repeat
00:09:38.031 ************************************
00:09:38.031 08:21:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:09:38.031 08:21:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:09:38.031 08:21:24 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:38.031 08:21:24 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:38.031 08:21:24 event -- common/autotest_common.sh@10 -- # set +x
00:09:38.031 ************************************
00:09:38.031 START TEST cpu_locks
00:09:38.031 ************************************
00:09:38.031 08:21:24 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:09:38.031 * Looking for test storage...
00:09:38.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:09:38.031 08:21:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:09:38.031 08:21:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:09:38.031 08:21:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:09:38.031 08:21:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:09:38.031 08:21:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:38.031 08:21:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:38.031 08:21:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:38.031 ************************************
00:09:38.031 START TEST default_locks
00:09:38.031 ************************************
00:09:38.031 08:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks
00:09:38.031 08:21:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=147110
00:09:38.031 08:21:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 147110
00:09:38.031 08:21:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:38.031 08:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 147110 ']'
00:09:38.031 08:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:38.031 08:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:38.031 08:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
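The default_locks test starting here launches spdk_tgt on core mask 0x1 and then asserts that the instance holds its per-core file lock. Stripped of the xtrace plumbing, the locks_exist check traced below reduces to this sketch (147110 is the PID this run happened to get; lslocks is from util-linux):

  # locks_exist from cpu_locks.sh, restated: does this PID hold a spdk_cpu_lock?
  pid=147110
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"

The 'lslocks: write error' seen below is harmless noise: grep -q exits at the first match and closes the pipe, so lslocks fails writing the rest of its output.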
00:09:38.031 08:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:38.031 08:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:38.031 [2024-05-15 08:21:25.021455] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:38.031 [2024-05-15 08:21:25.021495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147110 ]
00:09:38.031 EAL: No free 2048 kB hugepages reported on node 1
00:09:38.290 [2024-05-15 08:21:25.076031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:38.290 [2024-05-15 08:21:25.147126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:38.858 08:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:38.858 08:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0
00:09:38.858 08:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 147110
00:09:38.858 08:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 147110
00:09:38.858 08:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:39.427 lslocks: write error
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 147110
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 147110 ']'
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 147110
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 147110
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 147110'
killing process with pid 147110
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 147110
00:09:39.427 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 147110
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 147110
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 147110
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 147110
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 147110 ']'
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:39.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (147110) - No such process
00:09:39.687 ERROR: process (pid: 147110) is no longer running
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:39.687
00:09:39.687 real 0m1.586s
00:09:39.687 user 0m1.629s
00:09:39.687 sys 0m0.535s
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:39.687 08:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:39.687 ************************************
00:09:39.687 END TEST default_locks
00:09:39.687 ************************************
00:09:39.687 08:21:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:09:39.687 08:21:26 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:39.687 08:21:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:39.687 08:21:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:39.687 ************************************
00:09:39.687 START TEST default_locks_via_rpc
00:09:39.687 ************************************
00:09:39.687 08:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc
00:09:39.687 08:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=147367
00:09:39.687 08:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 147367
00:09:39.687 08:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:39.687 08:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 147367 ']'
00:09:39.687 08:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:39.687 08:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:39.687 08:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:39.687 08:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:39.687 08:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:39.946 [2024-05-15 08:21:26.680347] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:39.946 [2024-05-15 08:21:26.680396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147367 ]
00:09:39.946 EAL: No free 2048 kB hugepages reported on node 1
00:09:39.946 [2024-05-15 08:21:26.736840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.946 [2024-05-15 08:21:26.808683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 147367
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 147367
00:09:40.516 08:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:41.084 lslocks: write error
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 147367
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 147367 ']'
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 147367
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 147367
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 147367'
killing process with pid 147367
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 147367
00:09:41.084 08:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 147367
00:09:41.343
00:09:41.343 real 0m1.559s
00:09:41.343 user 0m1.620s
00:09:41.343 sys 0m0.510s
00:09:41.343 08:21:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:41.343 08:21:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:41.343 ************************************
00:09:41.343 END TEST default_locks_via_rpc
00:09:41.343 ************************************
00:09:41.343 08:21:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:09:41.343 08:21:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:41.343 08:21:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:41.343 08:21:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:41.343 ************************************
00:09:41.343 START TEST non_locking_app_on_locked_coremask
00:09:41.343 ************************************
00:09:41.343 08:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask
00:09:41.343 08:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=147751
00:09:41.343 08:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 147751 /var/tmp/spdk.sock
00:09:41.343 08:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:41.343 08:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 147751 ']'
00:09:41.343 08:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:41.343 08:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:41.343 08:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
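default_locks_via_rpc, which just finished, exercises the same locks at runtime rather than at startup: framework_disable_cpumask_locks releases them (the no_locks glob in the trace then finds no /var/tmp/spdk_cpu_lock_* files), and framework_enable_cpumask_locks claims them back. Driven by hand it would look roughly like this sketch (rpc.py defaults to /var/tmp/spdk.sock; the pidof lookup is illustrative, and the assumption that the lock files disappear while disabled is inferred from the no_locks check above):

  # Release and re-claim CPU core lock files on a live target (sketch)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc framework_disable_cpumask_locks             # lock files under /var/tmp go away
  $rpc framework_enable_cpumask_locks              # one lock per core in the app's cpumask again
  lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock   # verify, as locks_exist does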
00:09:41.343 08:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:41.343 08:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:41.343 [2024-05-15 08:21:28.308697] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:41.343 [2024-05-15 08:21:28.308738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147751 ]
00:09:41.343 EAL: No free 2048 kB hugepages reported on node 1
00:09:41.343 [2024-05-15 08:21:28.362034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:41.603 [2024-05-15 08:21:28.441477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=147861
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 147861 /var/tmp/spdk2.sock
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 147861 ']'
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:42.173 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:42.173 [2024-05-15 08:21:29.133598] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:42.173 [2024-05-15 08:21:29.133647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147861 ]
00:09:42.173 EAL: No free 2048 kB hugepages reported on node 1
00:09:42.432 [2024-05-15 08:21:29.203083] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:42.432 [2024-05-15 08:21:29.203107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:42.432 [2024-05-15 08:21:29.353351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.001 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:43.001 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:09:43.001 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 147751
00:09:43.001 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 147751
00:09:43.001 08:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:43.570 lslocks: write error
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 147751
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 147751 ']'
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 147751
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 147751
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 147751'
killing process with pid 147751
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 147751
00:09:43.570 08:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 147751
00:09:44.139 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 147861
00:09:44.139 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 147861 ']'
00:09:44.139 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 147861
00:09:44.139 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname
00:09:44.139 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:09:44.398 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 147861
00:09:44.398 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:09:44.398 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:09:44.398 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 147861'
00:09:44.398 killing process with pid 147861
00:09:44.398 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 147861
00:09:44.398 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 147861
00:09:44.658
00:09:44.658 real 0m3.257s
00:09:44.658 user 0m3.486s
00:09:44.658 sys 0m0.899s
00:09:44.658 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:44.658 08:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:44.658 ************************************
00:09:44.658 END TEST non_locking_app_on_locked_coremask
00:09:44.658 ************************************
00:09:44.658 08:21:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:09:44.658 08:21:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:44.658 08:21:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:44.658 08:21:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:44.658 ************************************
00:09:44.658 START TEST locking_app_on_unlocked_coremask
00:09:44.658 ************************************
00:09:44.658 08:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask
00:09:44.658 08:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=148352
00:09:44.658 08:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 148352 /var/tmp/spdk.sock
00:09:44.658 08:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:09:44.658 08:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 148352 ']'
00:09:44.658 08:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:44.658 08:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:44.658 08:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:44.658 08:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:44.658 08:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:44.658 [2024-05-15 08:21:31.634144] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:44.658 [2024-05-15 08:21:31.634192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148352 ]
00:09:44.918 EAL: No free 2048 kB hugepages reported on node 1
00:09:44.918 [2024-05-15 08:21:31.689011] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
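Both the test that just passed and the one starting here hinge on the --disable-cpumask-locks flag: an instance started with it skips lock claiming entirely (hence the 'CPU core locks deactivated' notice), which is how a second target can share core 0 with a locked one. Reduced to the two command lines visible in this trace (run from the spdk build tree; -r points the second instance at its own RPC socket):

  # Two targets on one core: only the first claims the lock (sketch)
  ./build/bin/spdk_tgt -m 0x1 &
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &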
00:09:44.918 [2024-05-15 08:21:31.689036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:44.918 [2024-05-15 08:21:31.758863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=148536
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 148536 /var/tmp/spdk2.sock
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 148536 ']'
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:45.487 08:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:45.487 [2024-05-15 08:21:32.474984] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:45.487 [2024-05-15 08:21:32.475031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148536 ]
00:09:45.487 EAL: No free 2048 kB hugepages reported on node 1
00:09:45.747 [2024-05-15 08:21:32.550388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:45.747 [2024-05-15 08:21:32.694560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:46.324 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:46.324 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0
00:09:46.324 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 148536
00:09:46.324 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 148536
00:09:46.324 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:46.896 lslocks: write error
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 148352
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 148352 ']'
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 148352
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 148352
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 148352'
killing process with pid 148352
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 148352
00:09:46.896 08:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 148352
00:09:47.465 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 148536
00:09:47.465 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 148536 ']'
00:09:47.465 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 148536
00:09:47.465 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname
00:09:47.466 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:09:47.466 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 148536
00:09:47.466 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:09:47.466 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:09:47.466 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 148536'
killing process with pid 148536
00:09:47.466 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 148536
00:09:47.466 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 148536
00:09:47.725
00:09:47.725 real 0m3.157s
00:09:47.725 user 0m3.378s
00:09:47.725 sys 0m0.869s
00:09:47.725 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:47.725 08:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:47.725 ************************************
00:09:47.725 END TEST locking_app_on_unlocked_coremask
00:09:47.725 ************************************
00:09:47.985 08:21:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:09:47.985 08:21:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:47.985 08:21:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:47.985 08:21:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:47.985 ************************************
00:09:47.985 START TEST locking_app_on_locked_coremask
00:09:47.985 ************************************
00:09:47.985 08:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask
00:09:47.986 08:21:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=148859
00:09:47.986 08:21:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 148859 /var/tmp/spdk.sock
00:09:47.986 08:21:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:47.986 08:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 148859 ']'
00:09:47.986 08:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:47.986 08:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:47.986 08:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:47.986 08:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:47.986 08:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:47.986 [2024-05-15 08:21:34.855380] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
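locking_app_on_locked_coremask, beginning here, is the negative case: the second instance keeps lock claiming enabled and is expected to abort with 'Cannot create lock on core 0'. The contested lock is an ordinary advisory file lock, one zero-padded file per core under /var/tmp (the naming convention is visible later in this log), so it can be inspected from outside the test:

  # Inspect per-core lock files while a locked target is running (sketch)
  ls -l /var/tmp/spdk_cpu_lock_*     # e.g. spdk_cpu_lock_000 for core 0
  lslocks | grep spdk_cpu_lock       # shows which PID holds each lock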
00:09:47.986 [2024-05-15 08:21:34.855420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148859 ]
00:09:47.986 EAL: No free 2048 kB hugepages reported on node 1
00:09:47.986 [2024-05-15 08:21:34.909249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:47.986 [2024-05-15 08:21:34.988277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=149084
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 149084 /var/tmp/spdk2.sock
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 149084 /var/tmp/spdk2.sock
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 149084 /var/tmp/spdk2.sock
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 149084 ']'
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:48.924 08:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:48.924 [2024-05-15 08:21:35.690680] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:48.924 [2024-05-15 08:21:35.690727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149084 ]
00:09:48.924 EAL: No free 2048 kB hugepages reported on node 1
00:09:48.924 [2024-05-15 08:21:35.759478] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 148859 has claimed it.
00:09:48.924 [2024-05-15 08:21:35.759508] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:49.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (149084) - No such process
00:09:49.493 ERROR: process (pid: 149084) is no longer running
00:09:49.493 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:49.493 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1
00:09:49.493 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:09:49.493 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:49.493 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:49.493 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:49.493 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 148859
00:09:49.493 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 148859
00:09:49.493 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:49.753 lslocks: write error
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 148859
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 148859 ']'
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 148859
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 148859
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 148859'
killing process with pid 148859
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 148859
00:09:49.753 08:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 148859
00:09:50.321
00:09:50.321 real 0m2.266s
00:09:50.321 user 0m2.499s
00:09:50.321 sys 0m0.592s
00:09:50.321 08:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:50.321 08:21:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:50.321 ************************************
00:09:50.321 END TEST locking_app_on_locked_coremask
00:09:50.321 ************************************
00:09:50.321 08:21:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:50.321 08:21:37 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:50.321 08:21:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:50.321 08:21:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:50.321 ************************************
00:09:50.321 START TEST locking_overlapped_coremask
00:09:50.321 ************************************
00:09:50.321 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask
00:09:50.321 08:21:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:09:50.322 08:21:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=149352
00:09:50.322 08:21:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 149352 /var/tmp/spdk.sock
00:09:50.322 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 149352 ']'
00:09:50.322 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:50.322 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:50.322 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:50.322 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:50.322 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:50.322 [2024-05-15 08:21:37.170531] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:50.322 [2024-05-15 08:21:37.170569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149352 ]
00:09:50.322 EAL: No free 2048 kB hugepages reported on node 1
00:09:50.322 [2024-05-15 08:21:37.223074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:50.322 [2024-05-15 08:21:37.304031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:50.322 [2024-05-15 08:21:37.304125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:50.322 [2024-05-15 08:21:37.304127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=149574
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 149574 /var/tmp/spdk2.sock
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 149574 /var/tmp/spdk2.sock
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 149574 /var/tmp/spdk2.sock
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 149574 ']'
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:51.259 08:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:51.260 [2024-05-15 08:21:38.039734] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:51.259 [2024-05-15 08:21:38.039783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149574 ] 00:09:51.260 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.260 [2024-05-15 08:21:38.116131] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 149352 has claimed it. 00:09:51.260 [2024-05-15 08:21:38.116168] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:51.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (149574) - No such process 00:09:51.828 ERROR: process (pid: 149574) is no longer running 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 149352 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 149352 ']' 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 149352 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 149352 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 149352' 00:09:51.828 killing process with pid 149352 00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 149352 
00:09:51.828 08:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 149352
00:09:52.088
00:09:52.088 real 0m1.914s
00:09:52.088 user 0m5.413s
00:09:52.088 sys 0m0.390s
00:09:52.088 08:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:52.088 08:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:52.088 ************************************
00:09:52.088 END TEST locking_overlapped_coremask
00:09:52.088 ************************************
00:09:52.088 08:21:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:09:52.088 08:21:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:52.088 08:21:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:52.088 08:21:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:52.348 ************************************
00:09:52.348 START TEST locking_overlapped_coremask_via_rpc
00:09:52.348 ************************************
00:09:52.348 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc
00:09:52.348 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=149684
00:09:52.348 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 149684 /var/tmp/spdk.sock
00:09:52.348 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:09:52.348 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 149684 ']'
00:09:52.348 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:52.348 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:52.348 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:52.348 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:52.348 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:52.348 [2024-05-15 08:21:39.165959] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:52.348 [2024-05-15 08:21:39.165997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149684 ]
00:09:52.348 EAL: No free 2048 kB hugepages reported on node 1
00:09:52.348 [2024-05-15 08:21:39.218184] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:52.348 [2024-05-15 08:21:39.218209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:52.348 [2024-05-15 08:21:39.299176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:52.348 [2024-05-15 08:21:39.299190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:52.348 [2024-05-15 08:21:39.299193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=149855
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 149855 /var/tmp/spdk2.sock
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 149855 ']'
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:53.286 08:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:53.286 [2024-05-15 08:21:40.030411] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:09:53.286 [2024-05-15 08:21:40.030464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149855 ]
00:09:53.286 EAL: No free 2048 kB hugepages reported on node 1
00:09:53.286 [2024-05-15 08:21:40.108998] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:53.286 [2024-05-15 08:21:40.109027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.286 [2024-05-15 08:21:40.261665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.286 [2024-05-15 08:21:40.265210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.286 [2024-05-15 08:21:40.265211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.853 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.853 [2024-05-15 08:21:40.872240] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 149684 has claimed it. 
00:09:54.111 request:
00:09:54.111 {
00:09:54.111 "method": "framework_enable_cpumask_locks",
00:09:54.111 "req_id": 1
00:09:54.111 }
00:09:54.111 Got JSON-RPC error response
00:09:54.111 response:
00:09:54.111 {
00:09:54.111 "code": -32603,
00:09:54.111 "message": "Failed to claim CPU core: 2"
00:09:54.111 }
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 149684 /var/tmp/spdk.sock
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 149684 ']'
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:54.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:54.111 08:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:54.111 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:54.111 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:09:54.111 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 149855 /var/tmp/spdk2.sock
00:09:54.111 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 149855 ']'
00:09:54.111 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:54.111 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:54.111 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:54.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
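The -32603 error above is the point of the test: framework_enable_cpumask_locks succeeded on the first target, so the second target cannot claim core 2, the one core both masks select. The contested core falls out of a one-line AND of the two masks (a worked check, not taken from the test):

    # 0x7 (cores 0-2) AND 0x1c (cores 2-4) overlap only on bit 2 -> core 2.
    printf 'contested mask: %#x\n' $(( 0x7 & 0x1c ))    # prints: contested mask: 0x4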
00:09:54.111 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:54.111 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.371 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:54.371 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:54.371 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:54.371 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:54.371 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:54.371 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:54.371 00:09:54.371 real 0m2.145s 00:09:54.371 user 0m0.898s 00:09:54.371 sys 0m0.165s 00:09:54.371 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:54.371 08:21:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.371 ************************************ 00:09:54.371 END TEST locking_overlapped_coremask_via_rpc 00:09:54.371 ************************************ 00:09:54.371 08:21:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:54.371 08:21:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 149684 ]] 00:09:54.371 08:21:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 149684 00:09:54.371 08:21:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 149684 ']' 00:09:54.371 08:21:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 149684 00:09:54.371 08:21:41 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:09:54.371 08:21:41 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:54.371 08:21:41 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 149684 00:09:54.371 08:21:41 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:54.371 08:21:41 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:54.371 08:21:41 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 149684' 00:09:54.371 killing process with pid 149684 00:09:54.371 08:21:41 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 149684 00:09:54.371 08:21:41 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 149684 00:09:54.940 08:21:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 149855 ]] 00:09:54.940 08:21:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 149855 00:09:54.940 08:21:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 149855 ']' 00:09:54.940 08:21:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 149855 00:09:54.940 08:21:41 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:09:54.940 08:21:41 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:09:54.940 08:21:41 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 149855 00:09:54.940 08:21:41 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:09:54.940 08:21:41 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:09:54.940 08:21:41 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 149855' 00:09:54.940 killing process with pid 149855 00:09:54.940 08:21:41 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 149855 00:09:54.940 08:21:41 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 149855 00:09:55.201 08:21:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:55.201 08:21:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:55.201 08:21:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 149684 ]] 00:09:55.201 08:21:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 149684 00:09:55.201 08:21:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 149684 ']' 00:09:55.201 08:21:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 149684 00:09:55.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (149684) - No such process 00:09:55.201 08:21:42 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 149684 is not found' 00:09:55.201 Process with pid 149684 is not found 00:09:55.201 08:21:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 149855 ]] 00:09:55.201 08:21:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 149855 00:09:55.201 08:21:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 149855 ']' 00:09:55.201 08:21:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 149855 00:09:55.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (149855) - No such process 00:09:55.201 08:21:42 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 149855 is not found' 00:09:55.201 Process with pid 149855 is not found 00:09:55.201 08:21:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:55.201 00:09:55.201 real 0m17.226s 00:09:55.201 user 0m29.784s 00:09:55.201 sys 0m4.827s 00:09:55.201 08:21:42 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:55.201 08:21:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:55.201 ************************************ 00:09:55.201 END TEST cpu_locks 00:09:55.201 ************************************ 00:09:55.201 00:09:55.201 real 0m43.401s 00:09:55.201 user 1m23.738s 00:09:55.201 sys 0m8.102s 00:09:55.201 08:21:42 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:55.201 08:21:42 event -- common/autotest_common.sh@10 -- # set +x 00:09:55.201 ************************************ 00:09:55.201 END TEST event 00:09:55.201 ************************************ 00:09:55.201 08:21:42 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:55.201 08:21:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:55.201 08:21:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:55.201 08:21:42 -- common/autotest_common.sh@10 -- # set +x 00:09:55.201 ************************************ 00:09:55.201 START TEST thread 00:09:55.201 ************************************ 00:09:55.201 08:21:42 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:55.460 * Looking for test storage... 00:09:55.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:55.460 08:21:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:55.460 08:21:42 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:09:55.460 08:21:42 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:55.460 08:21:42 thread -- common/autotest_common.sh@10 -- # set +x 00:09:55.460 ************************************ 00:09:55.460 START TEST thread_poller_perf 00:09:55.460 ************************************ 00:09:55.460 08:21:42 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:55.460 [2024-05-15 08:21:42.321383] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:09:55.460 [2024-05-15 08:21:42.321448] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150412 ] 00:09:55.460 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.460 [2024-05-15 08:21:42.378830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.460 [2024-05-15 08:21:42.453059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.460 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:56.837 ====================================== 00:09:56.837 busy:2310515868 (cyc) 00:09:56.837 total_run_count: 403000 00:09:56.837 tsc_hz: 2300000000 (cyc) 00:09:56.837 ====================================== 00:09:56.837 poller_cost: 5733 (cyc), 2492 (nsec) 00:09:56.837 00:09:56.837 real 0m1.254s 00:09:56.837 user 0m1.177s 00:09:56.837 sys 0m0.074s 00:09:56.837 08:21:43 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:56.837 08:21:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:56.837 ************************************ 00:09:56.837 END TEST thread_poller_perf 00:09:56.837 ************************************ 00:09:56.837 08:21:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:56.837 08:21:43 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:09:56.837 08:21:43 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:56.837 08:21:43 thread -- common/autotest_common.sh@10 -- # set +x 00:09:56.837 ************************************ 00:09:56.837 START TEST thread_poller_perf 00:09:56.837 ************************************ 00:09:56.837 08:21:43 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:56.837 [2024-05-15 08:21:43.645650] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:09:56.837 [2024-05-15 08:21:43.645716] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150658 ] 00:09:56.837 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.837 [2024-05-15 08:21:43.702767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.837 [2024-05-15 08:21:43.770856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.837 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:58.222 ====================================== 00:09:58.222 busy:2301751138 (cyc) 00:09:58.222 total_run_count: 5297000 00:09:58.222 tsc_hz: 2300000000 (cyc) 00:09:58.222 ====================================== 00:09:58.222 poller_cost: 434 (cyc), 188 (nsec) 00:09:58.222 00:09:58.222 real 0m1.241s 00:09:58.222 user 0m1.164s 00:09:58.222 sys 0m0.073s 00:09:58.222 08:21:44 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:58.222 08:21:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:58.222 ************************************ 00:09:58.222 END TEST thread_poller_perf 00:09:58.222 ************************************ 00:09:58.222 08:21:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:58.222 00:09:58.222 real 0m2.714s 00:09:58.222 user 0m2.432s 00:09:58.222 sys 0m0.289s 00:09:58.222 08:21:44 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:58.222 08:21:44 thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.222 ************************************ 00:09:58.222 END TEST thread 00:09:58.222 ************************************ 00:09:58.222 08:21:44 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:09:58.222 08:21:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:58.222 08:21:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:58.222 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:09:58.222 ************************************ 00:09:58.222 START TEST accel 00:09:58.222 ************************************ 00:09:58.222 08:21:44 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:09:58.222 * Looking for test storage... 
00:09:58.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:09:58.222 08:21:45 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:09:58.222 08:21:45 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:09:58.222 08:21:45 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:58.222 08:21:45 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=150946 00:09:58.222 08:21:45 accel -- accel/accel.sh@63 -- # waitforlisten 150946 00:09:58.222 08:21:45 accel -- common/autotest_common.sh@827 -- # '[' -z 150946 ']' 00:09:58.222 08:21:45 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.222 08:21:45 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:58.222 08:21:45 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:58.222 08:21:45 accel -- accel/accel.sh@61 -- # build_accel_config 00:09:58.222 08:21:45 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.222 08:21:45 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:58.222 08:21:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:58.222 08:21:45 accel -- common/autotest_common.sh@10 -- # set +x 00:09:58.222 08:21:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:58.222 08:21:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.222 08:21:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.222 08:21:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:58.222 08:21:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:58.222 08:21:45 accel -- accel/accel.sh@41 -- # jq -r . 00:09:58.222 [2024-05-15 08:21:45.115266] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:09:58.222 [2024-05-15 08:21:45.115314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150946 ] 00:09:58.222 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.222 [2024-05-15 08:21:45.169646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.480 [2024-05-15 08:21:45.250362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.049 08:21:45 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:59.049 08:21:45 accel -- common/autotest_common.sh@860 -- # return 0 00:09:59.049 08:21:45 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:59.049 08:21:45 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:59.049 08:21:45 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:59.049 08:21:45 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:59.049 08:21:45 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:59.049 08:21:45 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:59.049 08:21:45 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:59.049 08:21:45 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.049 08:21:45 accel -- common/autotest_common.sh@10 -- # set +x 00:09:59.049 08:21:45 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 
08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:09:59.049 08:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:59.049 08:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:59.049 08:21:45 accel -- accel/accel.sh@75 -- # killprocess 150946 00:09:59.049 08:21:45 accel -- common/autotest_common.sh@946 -- # '[' -z 150946 ']' 00:09:59.049 08:21:45 accel -- common/autotest_common.sh@950 -- # kill -0 150946 00:09:59.049 08:21:45 accel -- common/autotest_common.sh@951 -- # uname 00:09:59.049 08:21:45 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:59.049 08:21:45 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 150946 00:09:59.049 08:21:46 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:59.049 08:21:46 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:59.049 08:21:46 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 150946' 00:09:59.049 killing process with pid 150946 00:09:59.049 08:21:46 accel -- common/autotest_common.sh@965 -- # kill 150946 00:09:59.049 08:21:46 accel -- common/autotest_common.sh@970 -- # wait 150946 00:09:59.618 08:21:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:09:59.618 08:21:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:59.618 08:21:46 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:59.618 08:21:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:59.618 08:21:46 accel -- common/autotest_common.sh@10 -- # set +x 00:09:59.618 08:21:46 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:09:59.618 08:21:46 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:59.618 08:21:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:09:59.618 08:21:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:59.618 08:21:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:59.618 08:21:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.618 08:21:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.618 08:21:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:59.618 08:21:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:09:59.618 08:21:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
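Every opcode in the loop above maps to the software module, as expected with no hardware accel configuration loaded (accel_json_cfg is empty in this run). The same table the loop consumes can be fetched by hand with the exact jq filter from the trace (rpc.py path assumed):

    # Sketch: query opcode->module assignments the way expected_opcs is built.
    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software
    # ...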
00:09:59.618 08:21:46 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:59.618 08:21:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:09:59.618 08:21:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:59.618 08:21:46 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:59.618 08:21:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:59.618 08:21:46 accel -- common/autotest_common.sh@10 -- # set +x 00:09:59.618 ************************************ 00:09:59.618 START TEST accel_missing_filename 00:09:59.618 ************************************ 00:09:59.618 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:09:59.618 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:09:59.618 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:59.618 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:59.618 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.618 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:59.618 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.618 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:09:59.619 08:21:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:59.619 08:21:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:09:59.619 08:21:46 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:59.619 08:21:46 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:59.619 08:21:46 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.619 08:21:46 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.619 08:21:46 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:59.619 08:21:46 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:09:59.619 08:21:46 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:09:59.619 [2024-05-15 08:21:46.506576] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:09:59.619 [2024-05-15 08:21:46.506642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151219 ] 00:09:59.619 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.619 [2024-05-15 08:21:46.565302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.619 [2024-05-15 08:21:46.641225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.878 [2024-05-15 08:21:46.682066] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.878 [2024-05-15 08:21:46.741693] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:09:59.878 A filename is required. 
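accel_missing_filename is a negative test: the compress workload requires an input file, so running it without -l fails with the "A filename is required." error above, and the NOT wrapper turns that failure into a pass. A sketch of the failing call and its corrected form (binary path as used in this workspace):

    # Fails as shown above: compress with no -l input file.
    ./build/examples/accel_perf -t 1 -w compress
    # Corrected form, pointing -l at the bib test file the next test also uses:
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib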
00:09:59.878 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:09:59.878 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:59.878 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:09:59.878 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:09:59.878 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:09:59.878 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:59.878 00:09:59.878 real 0m0.361s 00:09:59.878 user 0m0.281s 00:09:59.878 sys 0m0.120s 00:09:59.878 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:59.878 08:21:46 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:09:59.878 ************************************ 00:09:59.878 END TEST accel_missing_filename 00:09:59.878 ************************************ 00:09:59.878 08:21:46 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:59.878 08:21:46 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:09:59.878 08:21:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:59.878 08:21:46 accel -- common/autotest_common.sh@10 -- # set +x 00:10:00.137 ************************************ 00:10:00.137 START TEST accel_compress_verify 00:10:00.137 ************************************ 00:10:00.137 08:21:46 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:00.137 08:21:46 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:10:00.137 08:21:46 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:00.137 08:21:46 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:00.137 08:21:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.137 08:21:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:00.137 08:21:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.138 08:21:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:00.138 08:21:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:00.138 08:21:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:00.138 08:21:46 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:00.138 08:21:46 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:00.138 08:21:46 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.138 08:21:46 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.138 08:21:46 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:00.138 
08:21:46 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:00.138 08:21:46 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:10:00.138 [2024-05-15 08:21:46.933368] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:00.138 [2024-05-15 08:21:46.933434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151240 ] 00:10:00.138 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.138 [2024-05-15 08:21:46.991089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.138 [2024-05-15 08:21:47.067006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.138 [2024-05-15 08:21:47.108144] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:00.398 [2024-05-15 08:21:47.168042] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:10:00.398 00:10:00.398 Compression does not support the verify option, aborting. 00:10:00.398 08:21:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:10:00.398 08:21:47 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:00.398 08:21:47 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:10:00.398 08:21:47 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:10:00.398 08:21:47 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:10:00.398 08:21:47 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:00.398 00:10:00.398 real 0m0.359s 00:10:00.398 user 0m0.283s 00:10:00.398 sys 0m0.115s 00:10:00.398 08:21:47 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:00.398 08:21:47 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:10:00.398 ************************************ 00:10:00.398 END TEST accel_compress_verify 00:10:00.398 ************************************ 00:10:00.398 08:21:47 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:00.398 08:21:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:00.398 08:21:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:00.398 08:21:47 accel -- common/autotest_common.sh@10 -- # set +x 00:10:00.398 ************************************ 00:10:00.398 START TEST accel_wrong_workload 00:10:00.398 ************************************ 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:10:00.398 
08:21:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:00.398 08:21:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:10:00.398 08:21:47 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:00.398 08:21:47 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:00.398 08:21:47 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.398 08:21:47 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.398 08:21:47 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:00.398 08:21:47 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:10:00.398 08:21:47 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:10:00.398 Unsupported workload type: foobar 00:10:00.398 [2024-05-15 08:21:47.362364] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:00.398 accel_perf options: 00:10:00.398 [-h help message] 00:10:00.398 [-q queue depth per core] 00:10:00.398 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:00.398 [-T number of threads per core 00:10:00.398 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:00.398 [-t time in seconds] 00:10:00.398 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:00.398 [ dif_verify, , dif_generate, dif_generate_copy 00:10:00.398 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:00.398 [-l for compress/decompress workloads, name of uncompressed input file 00:10:00.398 [-S for crc32c workload, use this seed value (default 0) 00:10:00.398 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:00.398 [-f for fill workload, use this BYTE value (default 255) 00:10:00.398 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:00.398 [-y verify result if this switch is on] 00:10:00.398 [-a tasks to allocate per core (default: same value as -q)] 00:10:00.398 Can be used to spread operations across a wider range of memory. 
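The usage dump above is printed because -w foobar is rejected during argument parsing; accel_perf exits with status 1 and the NOT wrapper counts that as the expected outcome. For contrast, a valid invocation using a workload from the list (the same form the crc32c test below runs):

    # Valid counterpart to '-w foobar': a supported workload with a crc seed.
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y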
00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:00.398 00:10:00.398 real 0m0.032s 00:10:00.398 user 0m0.016s 00:10:00.398 sys 0m0.016s 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:00.398 08:21:47 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:10:00.398 ************************************ 00:10:00.398 END TEST accel_wrong_workload 00:10:00.398 ************************************ 00:10:00.398 Error: writing output failed: Broken pipe 00:10:00.398 08:21:47 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:00.398 08:21:47 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:10:00.398 08:21:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:00.398 08:21:47 accel -- common/autotest_common.sh@10 -- # set +x 00:10:00.658 ************************************ 00:10:00.658 START TEST accel_negative_buffers 00:10:00.658 ************************************ 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:10:00.658 08:21:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:00.658 08:21:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:10:00.658 08:21:47 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:00.658 08:21:47 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:00.658 08:21:47 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.658 08:21:47 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.658 08:21:47 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:00.658 08:21:47 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:10:00.658 08:21:47 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:10:00.658 -x option must be non-negative. 
00:10:00.658 [2024-05-15 08:21:47.462419] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:00.658 accel_perf options: 00:10:00.658 [-h help message] 00:10:00.658 [-q queue depth per core] 00:10:00.658 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:00.658 [-T number of threads per core 00:10:00.658 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:00.658 [-t time in seconds] 00:10:00.658 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:00.658 [ dif_verify, , dif_generate, dif_generate_copy 00:10:00.658 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:00.658 [-l for compress/decompress workloads, name of uncompressed input file 00:10:00.658 [-S for crc32c workload, use this seed value (default 0) 00:10:00.658 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:00.658 [-f for fill workload, use this BYTE value (default 255) 00:10:00.658 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:00.658 [-y verify result if this switch is on] 00:10:00.658 [-a tasks to allocate per core (default: same value as -q)] 00:10:00.658 Can be used to spread operations across a wider range of memory. 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:00.658 00:10:00.658 real 0m0.032s 00:10:00.658 user 0m0.023s 00:10:00.658 sys 0m0.009s 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:00.658 08:21:47 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:10:00.658 ************************************ 00:10:00.658 END TEST accel_negative_buffers 00:10:00.658 ************************************ 00:10:00.658 Error: writing output failed: Broken pipe 00:10:00.658 08:21:47 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:00.658 08:21:47 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:00.658 08:21:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:00.658 08:21:47 accel -- common/autotest_common.sh@10 -- # set +x 00:10:00.658 ************************************ 00:10:00.658 START TEST accel_crc32c 00:10:00.658 ************************************ 00:10:00.658 08:21:47 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:00.658 08:21:47 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:00.658 [2024-05-15 08:21:47.566196] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:00.659 [2024-05-15 08:21:47.566261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151491 ] 00:10:00.659 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.659 [2024-05-15 08:21:47.620853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.918 [2024-05-15 08:21:47.695144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.918 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:00.918 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:00.919 08:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:48 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:02.298 08:21:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:02.298 00:10:02.298 real 0m1.360s 00:10:02.298 user 0m1.263s 00:10:02.298 sys 0m0.110s 00:10:02.298 08:21:48 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:02.298 08:21:48 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:02.298 ************************************ 00:10:02.298 END TEST accel_crc32c 00:10:02.298 ************************************ 00:10:02.298 08:21:48 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:02.298 08:21:48 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:02.298 08:21:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:02.298 08:21:48 accel -- common/autotest_common.sh@10 -- # set +x 00:10:02.298 ************************************ 00:10:02.298 START TEST accel_crc32c_C2 00:10:02.298 ************************************ 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:02.298 08:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:02.298 [2024-05-15 08:21:48.996279] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:02.298 [2024-05-15 08:21:48.996335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151771 ] 00:10:02.298 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.298 [2024-05-15 08:21:49.053310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.298 [2024-05-15 08:21:49.126771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.298 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.299 08:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:03.677 00:10:03.677 real 0m1.363s 00:10:03.677 user 0m1.258s 00:10:03.677 sys 0m0.118s 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:03.677 08:21:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:03.677 ************************************ 00:10:03.677 END TEST accel_crc32c_C2 00:10:03.677 ************************************ 00:10:03.677 08:21:50 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:03.677 08:21:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:03.677 08:21:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:03.677 08:21:50 accel -- common/autotest_common.sh@10 -- # set +x 00:10:03.677 ************************************ 00:10:03.677 START TEST accel_copy 00:10:03.677 ************************************ 00:10:03.677 08:21:50 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:03.677 08:21:50 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:10:03.677 [2024-05-15 08:21:50.423593] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:03.677 [2024-05-15 08:21:50.423652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152025 ] 00:10:03.677 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.677 [2024-05-15 08:21:50.480514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.677 [2024-05-15 08:21:50.555171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.677 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.678 08:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
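The dense `IFS=:` / `read -r var val` / `case "$var"` records above are bash xtrace from accel.sh's settings parser: accel_perf echoes its configuration back as colon-separated name/value pairs, and the loop latches the fields the harness asserts on later (`accel_module=software`, `accel_opc=copy`, a `4096 bytes` block size, queue depth 32, a `1 seconds` run). A minimal sketch of that loop shape, assuming the input format and key names rather than quoting the script:

    # Minimal sketch, not the literal accel.sh source: latch the engine module
    # and opcode from accel_perf's self-reported settings.
    accel_module='' accel_opc=''
    while IFS=: read -r var val; do
        case "$var" in
            *[Mm]odule*)   accel_module=${val//[[:space:]]/} ;;  # e.g. "software"
            *[Ww]orkload*) accel_opc=${val//[[:space:]]/}    ;;  # e.g. "copy"
        esac
    done < <(./build/examples/accel_perf -t 1 -w copy -y)  # path assumes an SPDK build tree
    [[ -n $accel_module && -n $accel_opc ]]  # these latches feed the end-of-test checks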
00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:10:05.054 08:21:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:05.054 00:10:05.054 real 0m1.362s 00:10:05.054 user 0m1.260s 00:10:05.054 sys 0m0.114s 00:10:05.054 08:21:51 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:05.054 08:21:51 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:10:05.054 ************************************ 00:10:05.054 END TEST accel_copy 00:10:05.054 ************************************ 00:10:05.054 08:21:51 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:05.054 08:21:51 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:10:05.054 08:21:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:05.054 08:21:51 accel -- common/autotest_common.sh@10 -- # set +x 00:10:05.054 ************************************ 00:10:05.054 START TEST accel_fill 00:10:05.054 ************************************ 00:10:05.054 08:21:51 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:05.054 08:21:51 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:10:05.054 08:21:51 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:10:05.054 [2024-05-15 08:21:51.842377] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:05.054 [2024-05-15 08:21:51.842424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152275 ] 00:10:05.054 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.054 [2024-05-15 08:21:51.896145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.054 [2024-05-15 08:21:51.968788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.054 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.055 08:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:06.431 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:06.432 08:21:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:06.432 08:21:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:10:06.432 08:21:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:06.432 00:10:06.432 real 0m1.356s 00:10:06.432 user 0m1.258s 00:10:06.432 sys 0m0.111s 00:10:06.432 08:21:53 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:06.432 08:21:53 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:10:06.432 ************************************ 00:10:06.432 END TEST accel_fill 00:10:06.432 ************************************ 00:10:06.432 08:21:53 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:06.432 08:21:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:06.432 08:21:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:06.432 08:21:53 accel -- common/autotest_common.sh@10 -- # set +x 00:10:06.432 ************************************ 00:10:06.432 START TEST accel_copy_crc32c 00:10:06.432 ************************************ 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
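Before each run, build_accel_config (accel/accel.sh@12-41 in the trace) assembles an optional JSON driver config in the accel_json_cfg array; the `[[ 0 -gt 0 ]]` checks above all fail on this host, so no optional module section is added and `jq -r .` serializes an effectively empty document that accel_perf reads back via `-c /dev/fd/62`. A hypothetical reconstruction of that handoff, with the empty '{}' config and the fd-62 binding assumed rather than taken from the script:

    # Hypothetical sketch of the fd-62 handoff (binary path taken from the log).
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w copy_crc32c -y \
        62< <(echo '{}')  # empty config: no optional accel modules on this run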
00:10:06.432 [2024-05-15 08:21:53.268340] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:06.432 [2024-05-15 08:21:53.268387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152524 ] 00:10:06.432 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.432 [2024-05-15 08:21:53.321965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.432 [2024-05-15 08:21:53.393551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:06.432 08:21:53 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:06.432 08:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:07.808 00:10:07.808 real 0m1.354s 00:10:07.808 user 0m1.257s 00:10:07.808 sys 0m0.112s 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:07.808 08:21:54 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:07.808 ************************************ 00:10:07.808 END TEST accel_copy_crc32c 00:10:07.808 ************************************ 00:10:07.808 08:21:54 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:07.808 08:21:54 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:07.808 08:21:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:07.808 08:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:10:07.808 ************************************ 00:10:07.808 START TEST accel_copy_crc32c_C2 00:10:07.808 ************************************ 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:07.808 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:07.808 [2024-05-15 08:21:54.692117] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:07.808 [2024-05-15 08:21:54.692190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152777 ] 00:10:07.808 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.808 [2024-05-15 08:21:54.747794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.808 [2024-05-15 08:21:54.818950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:08.067 08:21:54 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:08.067 08:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:09.002 00:10:09.002 real 0m1.358s 00:10:09.002 user 0m1.258s 00:10:09.002 sys 0m0.114s 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:09.002 08:21:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:10:09.002 ************************************ 00:10:09.002 END TEST accel_copy_crc32c_C2 00:10:09.002 ************************************ 00:10:09.261 08:21:56 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:09.261 08:21:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:09.261 08:21:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:09.261 08:21:56 accel -- common/autotest_common.sh@10 -- # set +x 00:10:09.261 ************************************ 00:10:09.261 START TEST accel_dualcast 00:10:09.261 ************************************ 00:10:09.261 08:21:56 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:10:09.261 [2024-05-15 08:21:56.107446] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
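The closing `[[ software == \s\o\f\t\w\a\r\e ]]` records above look mangled but are not: bash xtrace prints a quoted right-hand side of `[[ == ]]` with every character backslash-escaped to show it is matched literally rather than as a glob. Each test therefore ends with three literal checks: a module was latched, the opcode round-tripped, and the software engine (not a hardware offload) executed the operation. A minimal sketch using the traced variable names:

    # Minimal sketch of the three closing assertions (names as traced above).
    verify_software_path() {
        local accel_module=$1 accel_opc=$2
        [[ -n $accel_module ]] || return 1  # an engine module was latched
        [[ -n $accel_opc ]]    || return 1  # the opcode round-tripped
        [[ $accel_module == "software" ]]   # the software engine ran the op
    }
    verify_software_path software dualcast  # as in the dualcast run above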
00:10:09.261 [2024-05-15 08:21:56.107493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153026 ] 00:10:09.261 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.261 [2024-05-15 08:21:56.162658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.261 [2024-05-15 08:21:56.234584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:09.261 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.262 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.262 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.262 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:10:09.262 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.262 08:21:56 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:10:09.262 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.520 
08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:09.520 08:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:10.456 08:21:57 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:10.456 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:10.457 08:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:10.457 08:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:10.457 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:10.457 08:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:10.457 08:21:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:10.457 08:21:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:10:10.457 08:21:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:10.457 00:10:10.457 real 0m1.357s 00:10:10.457 user 0m1.256s 00:10:10.457 sys 0m0.114s 00:10:10.457 08:21:57 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:10.457 08:21:57 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:10:10.457 ************************************ 00:10:10.457 END TEST accel_dualcast 00:10:10.457 ************************************ 00:10:10.457 08:21:57 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:10.457 08:21:57 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:10.457 08:21:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:10.457 08:21:57 accel -- common/autotest_common.sh@10 -- # set +x 00:10:10.717 ************************************ 00:10:10.717 START TEST accel_compare 00:10:10.717 ************************************ 00:10:10.717 08:21:57 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:10:10.717 [2024-05-15 08:21:57.533106] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
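Each case in this stretch is wrapped in the same START TEST / END TEST banners with a bash time summary (the real/user/sys lines) in between. A simplified sketch of that wrapper pattern in plain bash, not the actual run_test from autotest_common.sh:

  # Simplified banner-and-time wrapper; assumes nothing beyond plain bash.
  run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # prints the real/user/sys lines seen in this log
    local rc=$?          # captures the exit status of the timed command
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  run_test_sketch accel_compare ./accel_perf -t 1 -w compare -y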
00:10:10.717 [2024-05-15 08:21:57.533179] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153279 ] 00:10:10.717 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.717 [2024-05-15 08:21:57.587703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.717 [2024-05-15 08:21:57.659823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:10.717 08:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:12.097 08:21:58 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:10:12.097 08:21:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:12.097 00:10:12.097 real 0m1.356s 00:10:12.097 user 0m1.255s 00:10:12.097 sys 0m0.114s 00:10:12.097 08:21:58 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:12.097 08:21:58 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:10:12.097 ************************************ 00:10:12.097 END TEST accel_compare 00:10:12.097 ************************************ 00:10:12.097 08:21:58 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:12.097 08:21:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:12.097 08:21:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.097 08:21:58 accel -- common/autotest_common.sh@10 -- # set +x 00:10:12.097 ************************************ 00:10:12.097 START TEST accel_xor 00:10:12.097 ************************************ 00:10:12.097 08:21:58 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:12.097 08:21:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:12.097 [2024-05-15 08:21:58.945098] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
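The repeating IFS=: / read -r var val / case "$var" lines above are accel.sh consuming accel_perf's key:value status output one field at a time. A cut-down sketch of that loop, assuming output lines shaped like opc:xor and module:software (the real script tracks more keys than these two):

  # Parse assumed key:value lines from accel_perf's stdout.
  accel_opc='' accel_module=''
  while IFS=: read -r var val; do
    case "$var" in
      opc)    accel_opc=$val ;;
      module) accel_module=$val ;;
    esac
  done < <(./accel_perf -t 1 -w xor -y)
  echo "ran $accel_opc via $accel_module"

Process substitution keeps the parsed variables in the current shell; piping accel_perf into the while loop would run it in a subshell and discard them.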
00:10:12.097 [2024-05-15 08:21:58.945159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153524 ] 00:10:12.097 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.097 [2024-05-15 08:21:59.000897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.097 [2024-05-15 08:21:59.072841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.097 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:12.097 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.097 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.097 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.097 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:12.097 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.097 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.097 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.098 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:12.356 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:12.357 08:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.294 
08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:13.294 08:22:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:13.294 00:10:13.294 real 0m1.360s 00:10:13.294 user 0m1.255s 00:10:13.294 sys 0m0.119s 00:10:13.294 08:22:00 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.294 08:22:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:13.294 ************************************ 00:10:13.294 END TEST accel_xor 00:10:13.294 ************************************ 00:10:13.294 08:22:00 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:13.294 08:22:00 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:13.294 08:22:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.294 08:22:00 accel -- common/autotest_common.sh@10 -- # set +x 00:10:13.554 ************************************ 00:10:13.554 START TEST accel_xor 00:10:13.554 ************************************ 00:10:13.554 08:22:00 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:13.554 [2024-05-15 08:22:00.360003] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
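This second accel_xor pass re-runs the workload with -x 3 appended; the val=2 line in the previous pass against the val=3 here suggests -x sets the number of XOR source buffers, with 2 as the default. A hypothetical side-by-side:

  ./accel_perf -t 1 -w xor -y        # assumed default: 2 source buffers
  ./accel_perf -t 1 -w xor -y -x 3   # three source buffers XORed together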
00:10:13.554 [2024-05-15 08:22:00.360062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153774 ] 00:10:13.554 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.554 [2024-05-15 08:22:00.416180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.554 [2024-05-15 08:22:00.490505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:13.554 08:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:14.948 
08:22:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:14.948 08:22:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:14.948 00:10:14.948 real 0m1.360s 00:10:14.948 user 0m1.258s 00:10:14.948 sys 0m0.115s 00:10:14.948 08:22:01 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:14.948 08:22:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:14.948 ************************************ 00:10:14.948 END TEST accel_xor 00:10:14.948 ************************************ 00:10:14.948 08:22:01 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:14.948 08:22:01 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:14.948 08:22:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:14.948 08:22:01 accel -- common/autotest_common.sh@10 -- # set +x 00:10:14.948 ************************************ 00:10:14.948 START TEST accel_dif_verify 00:10:14.948 ************************************ 00:10:14.948 08:22:01 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:10:14.948 [2024-05-15 08:22:01.789593] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
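Every software-path case so far lands within a few milliseconds of the same wall clock (real 0m1.35s-0m1.36s: the one-second -t window plus app startup and teardown). To pull the test names and timings out of a saved copy of this console output (the file name is hypothetical):

  # List START TEST banners and 'real' times from a saved log copy:
  grep -Eo '(START TEST [a-z_0-9]+|real[[:space:]]+[0-9]+m[0-9.]+s)' console.log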
00:10:14.948 [2024-05-15 08:22:01.789659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154025 ] 00:10:14.948 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.948 [2024-05-15 08:22:01.845905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.948 [2024-05-15 08:22:01.915302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:14.948 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 
08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:14.949 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:10:15.208 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:15.208 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:15.208 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:15.209 08:22:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:16.180 
08:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:16.180 08:22:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:16.180 00:10:16.180 real 0m1.357s 00:10:16.180 user 0m1.260s 00:10:16.180 sys 0m0.112s 00:10:16.180 08:22:03 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:16.180 08:22:03 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:10:16.180 ************************************ 00:10:16.180 END TEST accel_dif_verify 00:10:16.180 ************************************ 00:10:16.180 08:22:03 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:16.180 08:22:03 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:16.180 08:22:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:16.180 08:22:03 accel -- common/autotest_common.sh@10 -- # set +x 00:10:16.180 ************************************ 00:10:16.180 START TEST accel_dif_generate 00:10:16.180 ************************************ 00:10:16.180 08:22:03 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 
00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:10:16.180 08:22:03 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:10:16.441 [2024-05-15 08:22:03.213604] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:16.441 [2024-05-15 08:22:03.213671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154272 ] 00:10:16.441 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.441 [2024-05-15 08:22:03.269717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.441 [2024-05-15 08:22:03.341322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.441 08:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:17.821 08:22:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:17.821 08:22:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:17.821 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:10:17.822 08:22:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:17.822 00:10:17.822 real 0m1.357s 00:10:17.822 user 0m1.261s 00:10:17.822 sys 0m0.112s 00:10:17.822 
00:10:17.822 08:22:04 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:17.822 08:22:04 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:10:17.822 ************************************
00:10:17.822 END TEST accel_dif_generate
00:10:17.822 ************************************
00:10:17.822 08:22:04 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:10:17.822 08:22:04 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:10:17.822 08:22:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:17.822 08:22:04 accel -- common/autotest_common.sh@10 -- # set +x
00:10:17.822 ************************************
00:10:17.822 START TEST accel_dif_generate_copy
00:10:17.822 ************************************
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:10:17.822 [2024-05-15 08:22:04.635304] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:10:17.822 [2024-05-15 08:22:04.635357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154530 ]
00:10:17.822 EAL: No free 2048 kB hugepages reported on node 1
00:10:17.822 [2024-05-15 08:22:04.689497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:17.822 [2024-05-15 08:22:04.762348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:10:17.822 08:22:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
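The blocks of val=/case/IFS=:/read entries above are accel.sh echoing back each configuration value it receives from accel_perf, one key:value pair per loop pass. A minimal sketch of that parsing idiom, hedged because only the xtrace is visible here (the key names and the input redirection are illustrative stand-ins, not the verbatim accel.sh):

    # Sketch: split each "key:value" line on ':' and dispatch on the key.
    while IFS=: read -r var val; do
      case "$var" in
        opc)    accel_opc=$val ;;      # e.g. dif_generate_copy
        module) accel_module=$val ;;   # e.g. software
      esac
    done < "$config_stream"            # $config_stream is a hypothetical stand-in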
00:10:19.201 08:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:10:19.202 08:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:19.202 08:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:10:19.202 08:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:19.202
00:10:19.202 real 0m1.354s
00:10:19.202 user 0m1.252s
00:10:19.202 sys 0m0.114s
00:10:19.202 08:22:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:19.202 08:22:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:10:19.202 ************************************
00:10:19.202 END TEST accel_dif_generate_copy
00:10:19.202 ************************************
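The three [[ ... ]] lines before each timing block are the pass/fail assertions; set -x prints a quoted pattern one escaped character at a time, which is why software appears as \s\o\f\t\w\a\r\e. De-noised, the checks are simply:

    [[ -n $accel_module ]]              # some module handled the workload
    [[ -n $accel_opc ]]                 # the opcode under test was recorded
    [[ $accel_module == "software" ]]   # xtrace renders the quoted pattern as \s\o\f\t\w\a\r\e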
00:10:19.202 08:22:05 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:10:19.202 08:22:05 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:10:19.202 08:22:05 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
00:10:19.202 08:22:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:19.202 08:22:05 accel -- common/autotest_common.sh@10 -- # set +x
00:10:19.202 ************************************
00:10:19.202 START TEST accel_comp
00:10:19.202 ************************************
00:10:19.202 08:22:06 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:10:19.202 08:22:06 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:10:19.202 08:22:06 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:10:19.202 08:22:06 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:10:19.202 08:22:06 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:10:19.202 08:22:06 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:10:19.202 08:22:06 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:10:19.202 [2024-05-15 08:22:06.047335] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:10:19.202 [2024-05-15 08:22:06.047380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154778 ]
00:10:19.202 EAL: No free 2048 kB hugepages reported on node 1
00:10:19.202 [2024-05-15 08:22:06.101268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:19.202 [2024-05-15 08:22:06.173565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:19.202 08:22:06 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:10:19.202 08:22:06 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:10:19.202 08:22:06 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:10:19.202 08:22:06 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:19.463 08:22:06 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:10:19.463 08:22:06 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:10:19.463 08:22:06 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:10:19.463 08:22:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:10:19.463 08:22:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:10:19.463 08:22:06 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:10:19.463 08:22:06 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:10:19.463 08:22:06 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:10:20.402 08:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:10:20.402 08:22:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:20.402 08:22:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:10:20.402 08:22:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:20.402
00:10:20.402 real 0m1.357s
00:10:20.402 user 0m1.257s
00:10:20.402 sys 0m0.114s
00:10:20.402 08:22:07 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:20.402 08:22:07 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:10:20.402 ************************************
00:10:20.402 END TEST accel_comp
00:10:20.402 ************************************
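run_test (from autotest_common.sh) is what brackets every sub-test with the banner pairs and the real/user/sys block: it times the command it is handed and reports it under the given name. A rough sketch of its shape, not the verbatim helper (the real one also manages xtrace state and result bookkeeping):

    run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"          # e.g. accel_test -t 1 -w decompress -l .../test/accel/bib -y
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
    }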
00:10:20.402 08:22:07 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:10:20.402 08:22:07 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:10:20.402 08:22:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:20.402 08:22:07 accel -- common/autotest_common.sh@10 -- # set +x
00:10:20.663 ************************************
00:10:20.663 START TEST accel_decomp
00:10:20.663 ************************************
00:10:20.663 08:22:07 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:10:20.663 [2024-05-15 08:22:07.472998] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:10:20.663 [2024-05-15 08:22:07.473062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155023 ]
00:10:20.663 EAL: No free 2048 kB hugepages reported on node 1
00:10:20.663 [2024-05-15 08:22:07.529827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:20.663 [2024-05-15 08:22:07.601010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:10:20.663 08:22:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:10:22.043 08:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:22.043 08:22:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:22.043 08:22:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:10:22.043 08:22:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:22.043
00:10:22.043 real 0m1.363s
00:10:22.043 user 0m1.257s
00:10:22.043 sys 0m0.118s
00:10:22.043 08:22:08 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:22.043 08:22:08 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:10:22.043 ************************************
00:10:22.043 END TEST accel_decomp
00:10:22.043 ************************************
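Every test in this stretch reduces to one accel_perf invocation, and the flags can be read straight off the @12 lines above. The annotations here are inferences from this log rather than the tool's documentation:

    # -c /dev/fd/62   JSON accel config built by build_accel_config, fed through jq -r .
    # -t 1            run time, echoed back by the harness as '1 seconds'
    # -w decompress   workload name, recorded as accel_opc
    # -l <file>       compressed input (test/accel/bib) for the compress/decompress workloads
    # -y              verify the result; -y runs echo val=Yes where the others echo val=No
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y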
00:10:22.043 08:22:08 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:10:22.043 08:22:08 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:10:22.043 08:22:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:22.043 08:22:08 accel -- common/autotest_common.sh@10 -- # set +x
00:10:22.043 ************************************
00:10:22.043 START TEST accel_decmop_full
00:10:22.043 ************************************
00:10:22.043 08:22:08 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:10:22.043 08:22:08 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc
00:10:22.043 08:22:08 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module
00:10:22.043 08:22:08 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:10:22.044 08:22:08 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:10:22.044 08:22:08 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config
00:10:22.044 08:22:08 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r .
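The only change from the plain accel_decomp run is the trailing -o 0. Judging solely from the echoed sizes, -o selects the transfer size and 0 lets the run cover the whole compressed file: the trace below reports '111250 bytes' where the earlier runs reported '4096 bytes'. A hedged side-by-side:

    # Same workload, two transfer-size modes (sizes as echoed in this log):
    accel_test -t 1 -w decompress -l test/accel/bib -y        # echoes val='4096 bytes'
    accel_test -t 1 -w decompress -l test/accel/bib -y -o 0   # echoes val='111250 bytes'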
00:10:22.043 [2024-05-15 08:22:08.903559] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:10:22.044 [2024-05-15 08:22:08.903618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155278 ]
00:10:22.044 EAL: No free 2048 kB hugepages reported on node 1
00:10:22.044 [2024-05-15 08:22:08.960937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:22.044 [2024-05-15 08:22:09.033160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds'
00:10:22.303 08:22:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes
00:10:23.239 08:22:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val
00:10:23.240 08:22:10 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:23.240 08:22:10 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:10:23.240 08:22:10 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:23.240
00:10:23.240 real 0m1.372s
00:10:23.240 user 0m1.268s
00:10:23.240 sys 0m0.118s
00:10:23.240 08:22:10 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:23.240 08:22:10 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x
00:10:23.240 ************************************
00:10:23.240 END TEST accel_decmop_full
00:10:23.240 ************************************
00:10:23.499 08:22:10 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:23.499 08:22:10 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:10:23.499 08:22:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:23.499 08:22:10 accel -- common/autotest_common.sh@10 -- # set +x
00:10:23.499 ************************************
00:10:23.499 START TEST accel_decomp_mcore
00:10:23.499 ************************************
00:10:23.499 08:22:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:23.499 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:10:23.499 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:10:23.499 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:23.499 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:23.499 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:10:23.499 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
00:10:23.499 [2024-05-15 08:22:10.337842] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:10:23.499 [2024-05-15 08:22:10.337895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155524 ]
00:10:23.499 EAL: No free 2048 kB hugepages reported on node 1
00:10:23.499 [2024-05-15 08:22:10.392946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:23.499 [2024-05-15 08:22:10.468913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:23.500 [2024-05-15 08:22:10.469011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:10:23.500 [2024-05-15 08:22:10.469097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:10:23.500 [2024-05-15 08:22:10.469099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
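Here -m 0xf reaches DPDK as the coremask -c 0xf; each set bit pins one reactor, which matches the four reactor notices above, and the user time at the end of this test (roughly 4.6 s of CPU for a 1 s wall-clock run) is consistent with four busy cores. Worked out in bash:

    # 0xf = binary 1111 -> bits 0..3 set -> reactors on cores 0, 1, 2, 3
    mask=0xf
    for core in {0..3}; do
      (( mask & (1 << core) )) && echo "reactor expected on core $core"
    done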
00:10:23.500 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:10:23.500 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:10:23.500 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:10:23.500 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:23.759 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:10:23.759 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:10:23.759 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:10:23.759 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:10:23.759 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:10:23.759 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:10:23.759 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:10:23.759 08:22:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
accel/accel.sh@19 -- # IFS=: 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:24.696 00:10:24.696 real 0m1.370s 00:10:24.696 user 0m4.594s 00:10:24.696 sys 0m0.122s 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.696 08:22:11 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:24.696 ************************************ 00:10:24.696 END TEST accel_decomp_mcore 00:10:24.696 ************************************ 00:10:24.696 08:22:11 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:24.696 08:22:11 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:10:24.955 08:22:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.955 08:22:11 accel -- common/autotest_common.sh@10 -- # set +x 00:10:24.955 ************************************ 00:10:24.955 START TEST accel_decomp_full_mcore 00:10:24.955 ************************************ 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:24.955 [2024-05-15 08:22:11.779495] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:24.955 [2024-05-15 08:22:11.779541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155779 ] 00:10:24.955 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.955 [2024-05-15 08:22:11.833838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.955 [2024-05-15 08:22:11.909005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.955 [2024-05-15 08:22:11.909101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.955 [2024-05-15 08:22:11.909173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.955 [2024-05-15 08:22:11.909174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:24.955 08:22:11 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.955 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.956 08:22:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:26.328 00:10:26.328 real 0m1.381s 00:10:26.328 user 0m4.636s 00:10:26.328 sys 0m0.122s 00:10:26.328 08:22:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:26.329 08:22:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:26.329 ************************************ 00:10:26.329 END TEST accel_decomp_full_mcore 00:10:26.329 ************************************ 00:10:26.329 08:22:13 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:26.329 08:22:13 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:10:26.329 08:22:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:26.329 08:22:13 accel -- common/autotest_common.sh@10 -- # set +x 00:10:26.329 ************************************ 00:10:26.329 START TEST accel_decomp_mthread 00:10:26.329 ************************************ 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:26.329 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:10:26.329 [2024-05-15 08:22:13.231417] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:26.329 [2024-05-15 08:22:13.231483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156030 ] 00:10:26.329 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.329 [2024-05-15 08:22:13.288235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.586 [2024-05-15 08:22:13.362509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:26.586 08:22:13 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:27.955 00:10:27.955 real 0m1.368s 00:10:27.955 user 0m1.261s 00:10:27.955 sys 0m0.120s 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:27.955 08:22:14 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:27.955 ************************************ 00:10:27.955 END TEST accel_decomp_mthread 00:10:27.955 ************************************ 00:10:27.955 08:22:14 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:27.955 08:22:14 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:10:27.955 08:22:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:27.955 08:22:14 
accel -- common/autotest_common.sh@10 -- # set +x 00:10:27.955 ************************************ 00:10:27.955 START TEST accel_decomp_full_mthread 00:10:27.955 ************************************ 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:27.955 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:27.955 [2024-05-15 08:22:14.665866] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
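Every decompress variant in this run drives the same accel_perf example binary; the full_mthread case traced above adds -o 0 and -T 2 to the invocation. The same command, reformatted for readability — the per-flag comments are inferred from the surrounding val= trace lines, so treat them as a best-effort reading rather than authoritative accel_perf documentation:

args=(
  -c /dev/fd/62      # JSON accel config handed over on an inherited descriptor
  -t 1               # run time, matching val='1 seconds' in the trace
  -w decompress      # workload under test, matching accel_opc=decompress
  -l test/accel/bib  # compressed input file (val=.../spdk/test/accel/bib)
  -y                 # verify the decompressed output
  -o 0               # apparently "whole file": val='111250 bytes' here vs '4096 bytes' without it
  -T 2               # two threads, matching val=2 in the mthread variants
)
./build/examples/accel_perf "${args[@]}"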
00:10:27.955 [2024-05-15 08:22:14.665930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156280 ] 00:10:27.955 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.955 [2024-05-15 08:22:14.722694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.955 [2024-05-15 08:22:14.796038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
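The build_accel_config steps traced at the start of each test (accel_json_cfg=(), the chain of [[ 0 -gt 0 ]] guards, local IFS=',', jq -r .) assemble the JSON that accel_perf reads from /dev/fd/62. A loose reconstruction of that pattern — the guard semantics, the subsystem layout, and the example RPC snippet are assumptions, not the verbatim script:

accel_json_cfg=()
# Each "[[ 0 -gt 0 ]]" guard above would append a module snippet when the
# matching SPDK_TEST_* knob is enabled; all were 0 in this run. For example:
# accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
build_accel_config() {
  local IFS=,
  jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
}
exec 62< <(build_accel_config)  # feeds accel_perf's -c /dev/fd/62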
00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.956 08:22:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.329 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.329 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.329 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.329 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.329 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.329 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:29.330 00:10:29.330 real 0m1.386s 00:10:29.330 user 0m1.275s 00:10:29.330 sys 0m0.124s 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:29.330 08:22:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:29.330 ************************************ 00:10:29.330 END TEST accel_decomp_full_mthread 00:10:29.330 
************************************ 00:10:29.330 08:22:16 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:10:29.330 08:22:16 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:29.330 08:22:16 accel -- accel/accel.sh@137 -- # build_accel_config 00:10:29.330 08:22:16 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:29.330 08:22:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:29.330 08:22:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:29.330 08:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:10:29.330 08:22:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:29.330 08:22:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.330 08:22:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.330 08:22:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:29.330 08:22:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:29.330 08:22:16 accel -- accel/accel.sh@41 -- # jq -r . 00:10:29.330 ************************************ 00:10:29.330 START TEST accel_dif_functional_tests 00:10:29.330 ************************************ 00:10:29.330 08:22:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:29.330 [2024-05-15 08:22:16.136115] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:29.330 [2024-05-15 08:22:16.136153] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156537 ] 00:10:29.330 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.330 [2024-05-15 08:22:16.188812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:29.330 [2024-05-15 08:22:16.263590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.330 [2024-05-15 08:22:16.263685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.330 [2024-05-15 08:22:16.263687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.330 00:10:29.330 00:10:29.330 CUnit - A unit testing framework for C - Version 2.1-3 00:10:29.330 http://cunit.sourceforge.net/ 00:10:29.330 00:10:29.330 00:10:29.330 Suite: accel_dif 00:10:29.330 Test: verify: DIF generated, GUARD check ...passed 00:10:29.330 Test: verify: DIF generated, APPTAG check ...passed 00:10:29.330 Test: verify: DIF generated, REFTAG check ...passed 00:10:29.330 Test: verify: DIF not generated, GUARD check ...[2024-05-15 08:22:16.332626] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:29.330 [2024-05-15 08:22:16.332669] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:29.330 passed 00:10:29.330 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 08:22:16.332697] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:29.330 [2024-05-15 08:22:16.332711] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:29.330 passed 00:10:29.330 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 08:22:16.332729] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:29.330 [2024-05-15 
08:22:16.332745] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:29.330 passed 00:10:29.330 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:29.330 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 08:22:16.332784] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:29.330 passed 00:10:29.330 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:29.330 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:29.330 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:29.330 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 08:22:16.332896] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:29.330 passed 00:10:29.330 Test: generate copy: DIF generated, GUARD check ...passed 00:10:29.330 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:29.330 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:29.330 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:29.330 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:29.330 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:29.330 Test: generate copy: iovecs-len validate ...[2024-05-15 08:22:16.333063] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:10:29.330 passed 00:10:29.330 Test: generate copy: buffer alignment validate ...passed 00:10:29.330 00:10:29.330 Run Summary: Type Total Ran Passed Failed Inactive 00:10:29.330 suites 1 1 n/a 0 0 00:10:29.330 tests 20 20 20 0 0 00:10:29.330 asserts 204 204 204 0 n/a 00:10:29.330 00:10:29.330 Elapsed time = 0.000 seconds 00:10:29.762 00:10:29.762 real 0m0.432s 00:10:29.762 user 0m0.652s 00:10:29.762 sys 0m0.137s 00:10:29.762 08:22:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:29.762 08:22:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:10:29.762 ************************************ 00:10:29.762 END TEST accel_dif_functional_tests 00:10:29.762 ************************************ 00:10:29.762 00:10:29.762 real 0m31.586s 00:10:29.762 user 0m35.384s 00:10:29.762 sys 0m4.226s 00:10:29.762 08:22:16 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:29.762 08:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:10:29.762 ************************************ 00:10:29.762 END TEST accel 00:10:29.762 ************************************ 00:10:29.762 08:22:16 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:29.762 08:22:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:29.762 08:22:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:29.762 08:22:16 -- common/autotest_common.sh@10 -- # set +x 00:10:29.762 ************************************ 00:10:29.762 START TEST accel_rpc 00:10:29.762 ************************************ 00:10:29.762 08:22:16 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:29.762 * Looking for test storage... 
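Unlike the perf-based variants, accel_dif_functional_tests runs a CUnit suite, and the dif.c *ERROR* lines above are expected output: the "verify: DIF not generated, ..." cases deliberately corrupt the guard, app-tag, and ref-tag protection fields and pass precisely because verification reports the mismatch (Expected=5a5a, Actual=7867 and so on). The binary can be invoked standalone the same way the harness does; the empty JSON config here is an assumption, since the suite normally supplies one via build_accel_config:

./test/accel/dif/dif -c <(echo '{}')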
00:10:29.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:10:29.762 08:22:16 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:29.762 08:22:16 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=156819 00:10:29.762 08:22:16 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:29.762 08:22:16 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 156819 00:10:29.762 08:22:16 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 156819 ']' 00:10:29.762 08:22:16 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.762 08:22:16 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:29.762 08:22:16 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.762 08:22:16 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:29.762 08:22:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.762 [2024-05-15 08:22:16.768271] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:29.762 [2024-05-15 08:22:16.768318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156819 ] 00:10:30.019 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.019 [2024-05-15 08:22:16.820922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.019 [2024-05-15 08:22:16.898710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.582 08:22:17 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:30.582 08:22:17 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:30.582 08:22:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:30.582 08:22:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:30.582 08:22:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:30.582 08:22:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:30.582 08:22:17 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:30.582 08:22:17 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:30.582 08:22:17 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:30.582 08:22:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.582 ************************************ 00:10:30.582 START TEST accel_assign_opcode 00:10:30.582 ************************************ 00:10:30.582 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:10:30.582 08:22:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:30.582 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.582 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:30.582 [2024-05-15 08:22:17.600783] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:30.582 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:10:30.582 08:22:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:30.582 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.582 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:30.839 [2024-05-15 08:22:17.608790] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.839 software 00:10:30.839 00:10:30.839 real 0m0.237s 00:10:30.839 user 0m0.037s 00:10:30.839 sys 0m0.011s 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:30.839 08:22:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:30.839 ************************************ 00:10:30.839 END TEST accel_assign_opcode 00:10:30.839 ************************************ 00:10:30.839 08:22:17 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 156819 00:10:30.839 08:22:17 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 156819 ']' 00:10:30.839 08:22:17 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 156819 00:10:31.097 08:22:17 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:10:31.097 08:22:17 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:31.097 08:22:17 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 156819 00:10:31.097 08:22:17 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:31.097 08:22:17 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:31.097 08:22:17 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 156819' 00:10:31.097 killing process with pid 156819 00:10:31.097 08:22:17 accel_rpc -- common/autotest_common.sh@965 -- # kill 156819 00:10:31.097 08:22:17 accel_rpc -- common/autotest_common.sh@970 -- # wait 156819 00:10:31.355 00:10:31.355 real 0m1.607s 00:10:31.355 user 0m1.688s 00:10:31.355 sys 0m0.398s 00:10:31.355 08:22:18 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:31.355 08:22:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.355 ************************************ 00:10:31.355 END TEST accel_rpc 00:10:31.355 ************************************ 00:10:31.355 08:22:18 -- spdk/autotest.sh@181 -- # run_test 
app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:31.355 08:22:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:31.355 08:22:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:31.355 08:22:18 -- common/autotest_common.sh@10 -- # set +x 00:10:31.355 ************************************ 00:10:31.355 START TEST app_cmdline 00:10:31.355 ************************************ 00:10:31.355 08:22:18 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:31.612 * Looking for test storage... 00:10:31.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:31.612 08:22:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:31.612 08:22:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=157126 00:10:31.612 08:22:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 157126 00:10:31.612 08:22:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:31.612 08:22:18 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 157126 ']' 00:10:31.612 08:22:18 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.612 08:22:18 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:31.612 08:22:18 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.612 08:22:18 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:31.612 08:22:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:31.612 [2024-05-15 08:22:18.448390] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
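app_cmdline restarts spdk_tgt with an explicit RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods) and checks that exactly those two methods are exposed. Querying them by hand over /var/tmp/spdk.sock — the default socket the waitforlisten message above refers to — would look like this:

./scripts/rpc.py spdk_get_version
./scripts/rpc.py rpc_get_methods
# spdk_get_version returns the shape captured in the log below:
# {"version": "SPDK v24.05-pre git sha1 f0bf11db4",
#  "fields": {"major": 24, "minor": 5, "patch": 0, "suffix": "-pre", "commit": "f0bf11db4"}}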
00:10:31.612 [2024-05-15 08:22:18.448438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157126 ]
00:10:31.612 EAL: No free 2048 kB hugepages reported on node 1
00:10:31.612 [2024-05-15 08:22:18.502578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:31.612 [2024-05-15 08:22:18.579538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@860 -- # return 0
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:10:32.545 {
00:10:32.545   "version": "SPDK v24.05-pre git sha1 f0bf11db4",
00:10:32.545   "fields": {
00:10:32.545     "major": 24,
00:10:32.545     "minor": 5,
00:10:32.545     "patch": 0,
00:10:32.545     "suffix": "-pre",
00:10:32.545     "commit": "f0bf11db4"
00:10:32.545   }
00:10:32.545 }
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@26 -- # sort
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:10:32.545 08:22:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@648 -- # local es=0
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:10:32.545 08:22:19 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:32.803 request:
00:10:32.803 {
00:10:32.803   "method": "env_dpdk_get_mem_stats",
00:10:32.803   "req_id": 1
00:10:32.803 }
00:10:32.803 Got JSON-RPC error response
00:10:32.803 response:
00:10:32.803 {
00:10:32.803   "code": -32601,
00:10:32.803   "message": "Method not found"
00:10:32.803 }
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@651 -- # es=1
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:10:32.803 08:22:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 157126
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 157126 ']'
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 157126
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@951 -- # uname
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 157126
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 157126'
killing process with pid 157126
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@965 -- # kill 157126
00:10:32.803 08:22:19 app_cmdline -- common/autotest_common.sh@970 -- # wait 157126
00:10:33.061
00:10:33.061 real 0m1.694s
00:10:33.061 user 0m2.012s
00:10:33.061 sys 0m0.426s
00:10:33.061 08:22:20 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:33.061 08:22:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:33.061 ************************************
00:10:33.061 END TEST app_cmdline
00:10:33.061 ************************************
00:10:33.061 08:22:20 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:10:33.061 08:22:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:10:33.061 08:22:20 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:33.061 08:22:20 -- common/autotest_common.sh@10 -- # set +x
00:10:33.061 ************************************
00:10:33.061 START TEST version
00:10:33.061 ************************************
00:10:33.061 08:22:20 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:10:33.319 * Looking for test storage...
00:10:33.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:33.319 08:22:20 version -- app/version.sh@17 -- # get_header_version major 00:10:33.319 08:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:33.319 08:22:20 version -- app/version.sh@14 -- # cut -f2 00:10:33.319 08:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:10:33.319 08:22:20 version -- app/version.sh@17 -- # major=24 00:10:33.319 08:22:20 version -- app/version.sh@18 -- # get_header_version minor 00:10:33.319 08:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:33.319 08:22:20 version -- app/version.sh@14 -- # cut -f2 00:10:33.319 08:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:10:33.319 08:22:20 version -- app/version.sh@18 -- # minor=5 00:10:33.319 08:22:20 version -- app/version.sh@19 -- # get_header_version patch 00:10:33.319 08:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:33.319 08:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:10:33.319 08:22:20 version -- app/version.sh@14 -- # cut -f2 00:10:33.319 08:22:20 version -- app/version.sh@19 -- # patch=0 00:10:33.319 08:22:20 version -- app/version.sh@20 -- # get_header_version suffix 00:10:33.319 08:22:20 version -- app/version.sh@14 -- # cut -f2 00:10:33.319 08:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:33.319 08:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:10:33.319 08:22:20 version -- app/version.sh@20 -- # suffix=-pre 00:10:33.319 08:22:20 version -- app/version.sh@22 -- # version=24.5 00:10:33.319 08:22:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:33.319 08:22:20 version -- app/version.sh@28 -- # version=24.5rc0 00:10:33.319 08:22:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:33.319 08:22:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:33.319 08:22:20 version -- app/version.sh@30 -- # py_version=24.5rc0 00:10:33.319 08:22:20 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:10:33.319 00:10:33.319 real 0m0.149s 00:10:33.319 user 0m0.076s 00:10:33.319 sys 0m0.107s 00:10:33.319 08:22:20 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:33.319 08:22:20 version -- common/autotest_common.sh@10 -- # set +x 00:10:33.319 ************************************ 00:10:33.319 END TEST version 00:10:33.319 ************************************ 00:10:33.319 08:22:20 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:10:33.319 08:22:20 -- spdk/autotest.sh@194 -- # uname -s 00:10:33.319 08:22:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:33.319 08:22:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:33.319 08:22:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:33.319 08:22:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
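The version test traced above recovers the version string by scraping include/spdk/version.h with a grep/cut/tr pipeline, then cross-checks it against the installed Python package. A minimal bash sketch of that pipeline, condensed from the trace (SPDK_DIR and the uppercase component argument are illustrative simplifications; the harness itself uses the absolute workspace path and per-component wrapper calls):

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    # pull one component out of version.h, e.g. MAJOR -> 24 (fields are tab-separated)
    get_header_version() {
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$SPDK_DIR/include/spdk/version.h" \
            | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 5
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"
    # the -pre suffix maps to a Python-style rc0 tag, hence 24.5rc0 in the trace
    [[ -n $suffix ]] && version="${version}rc0"
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]] && echo "version check passed: $version"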
00:10:33.319 08:22:20 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:10:33.319 08:22:20 -- spdk/autotest.sh@256 -- # timing_exit lib 00:10:33.319 08:22:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.319 08:22:20 -- common/autotest_common.sh@10 -- # set +x 00:10:33.319 08:22:20 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:10:33.319 08:22:20 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:10:33.319 08:22:20 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:10:33.319 08:22:20 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:10:33.319 08:22:20 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:10:33.319 08:22:20 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:10:33.319 08:22:20 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:33.319 08:22:20 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:33.319 08:22:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:33.319 08:22:20 -- common/autotest_common.sh@10 -- # set +x 00:10:33.319 ************************************ 00:10:33.319 START TEST nvmf_tcp 00:10:33.320 ************************************ 00:10:33.320 08:22:20 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:33.578 * Looking for test storage... 00:10:33.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.578 08:22:20 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.578 08:22:20 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.578 08:22:20 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.578 08:22:20 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.578 08:22:20 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.578 08:22:20 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.578 08:22:20 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:10:33.578 08:22:20 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:10:33.578 08:22:20 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:33.578 08:22:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:10:33.578 08:22:20 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:33.578 08:22:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:33.578 08:22:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:33.579 
08:22:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:33.579 ************************************ 00:10:33.579 START TEST nvmf_example 00:10:33.579 ************************************ 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:33.579 * Looking for test storage... 00:10:33.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.579 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.836 08:22:20 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.836 08:22:20 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.836 08:22:20 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:33.837 08:22:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:39.105 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:39.105 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:39.105 Found net devices under 
0000:86:00.0: cvl_0_0 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:39.105 Found net devices under 0000:86:00.1: cvl_0_1 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:39.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:39.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms
00:10:39.105
00:10:39.105 --- 10.0.0.2 ping statistics ---
00:10:39.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:39.105 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:39.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:39.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms
00:10:39.105
00:10:39.105 --- 10.0.0.1 ping statistics ---
00:10:39.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:39.105 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:10:39.105 08:22:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:10:39.106 08:22:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=160518
00:10:39.106 08:22:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:10:39.106 08:22:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:10:39.106 08:22:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 160518
00:10:39.106 08:22:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 160518 ']'
00:10:39.106 08:22:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:39.106 08:22:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100
00:10:39.106 08:22:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:39.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
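nvmftestinit, traced above, splits the two ice-bound ports into a point-to-point target/initiator pair: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). The same sequence collected from the trace into one runnable block (root required; the interface names come from the e810 NICs discovered earlier in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its own ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator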
00:10:39.106 08:22:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable
00:10:39.106 08:22:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:39.106 EAL: No free 2048 kB hugepages reported on node 1
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:39.672 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:39.931 08:22:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:39.931 EAL: No free 2048 kB hugepages reported on node 1
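Everything spdk_nvme_perf measures in the results below was provisioned over JSON-RPC by the rpc_cmd calls just traced. The same bring-up written directly against scripts/rpc.py (the rpc/perf shorthand variables are illustrative; subsystem name, flags, and addresses are exactly as in the trace, and the example nvmf target must already be listening on /var/tmp/spdk.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512                     # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # exposed as NSID 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator-side load: 10 s of 4 KiB random I/O at queue depth 64, 30% reads
    $perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The table that follows is that run's output: roughly 18k IOPS (70 MiB/s) at a mean latency of about 3.6 ms through the namespaced e810 port.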
00:10:49.902 Initializing NVMe Controllers
00:10:49.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:49.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:49.902 Initialization complete. Launching workers.
00:10:49.902 ========================================================
00:10:49.902                                                                    Latency(us)
00:10:49.902 Device Information                                                       :     IOPS     MiB/s   Average       min       max
00:10:49.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17994.34     70.29   3556.39    491.82  15555.87
00:10:49.902 ========================================================
00:10:49.902 Total                                                                    : 17994.34     70.29   3556.39    491.82  15555.87
00:10:49.902
00:10:49.902 08:22:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:49.902 08:22:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:49.902 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:49.902 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:10:49.902 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:49.902 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:10:49.902 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:49.902 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:50.161 rmmod nvme_tcp
00:10:50.161 rmmod nvme_fabrics
00:10:50.161 rmmod nvme_keyring
00:10:50.161 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:50.161 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:10:50.161 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:10:50.161 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 160518 ']'
00:10:50.161 08:22:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 160518
00:10:50.161 08:22:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 160518 ']'
00:10:50.161 08:22:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 160518
00:10:50.161 08:22:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname
00:10:50.161 08:22:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:10:50.161 08:22:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 160518
00:10:50.161 08:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf
00:10:50.161 08:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']'
00:10:50.161 08:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 160518'
killing process with pid 160518
00:10:50.161 08:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 160518
00:10:50.161 08:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 160518
00:10:50.420 nvmf threads initialize successfully
00:10:50.420 bdev subsystem init successfully
00:10:50.420 created a nvmf target service
00:10:50.420 create targets's poll groups done
00:10:50.420 all subsystems of target started
00:10:50.420 nvmf target is running
00:10:50.420 all subsystems of target stopped
00:10:50.420 destroy targets's poll groups done
00:10:50.420 destroyed the nvmf target service
00:10:50.420 bdev subsystem finish successfully
00:10:50.420 nvmf threads destroy successfully
00:10:50.420 08:22:37
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:50.420 08:22:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:50.420 08:22:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:50.420 08:22:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.420 08:22:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:50.420 08:22:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.420 08:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.420 08:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.321 08:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:52.321 08:22:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:52.321 08:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.321 08:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.321 00:10:52.321 real 0m18.823s 00:10:52.321 user 0m45.639s 00:10:52.321 sys 0m5.246s 00:10:52.321 08:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:52.321 08:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.321 ************************************ 00:10:52.321 END TEST nvmf_example 00:10:52.321 ************************************ 00:10:52.581 08:22:39 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:52.581 08:22:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:52.581 08:22:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:52.581 08:22:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:52.581 ************************************ 00:10:52.581 START TEST nvmf_filesystem 00:10:52.581 ************************************ 00:10:52.581 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:52.581 * Looking for test storage... 
00:10:52.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:52.582 08:22:39 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:52.582 08:22:39 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:52.582 
08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:52.582 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:52.582 #define SPDK_CONFIG_H 00:10:52.582 #define SPDK_CONFIG_APPS 1 00:10:52.582 #define SPDK_CONFIG_ARCH native 00:10:52.582 #undef SPDK_CONFIG_ASAN 00:10:52.582 #undef SPDK_CONFIG_AVAHI 00:10:52.582 #undef SPDK_CONFIG_CET 00:10:52.582 #define SPDK_CONFIG_COVERAGE 1 00:10:52.582 #define SPDK_CONFIG_CROSS_PREFIX 00:10:52.582 #undef SPDK_CONFIG_CRYPTO 00:10:52.582 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:52.582 #undef SPDK_CONFIG_CUSTOMOCF 00:10:52.582 #undef SPDK_CONFIG_DAOS 00:10:52.582 #define SPDK_CONFIG_DAOS_DIR 00:10:52.582 #define SPDK_CONFIG_DEBUG 1 00:10:52.582 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:52.582 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:52.582 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:52.582 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:52.582 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:52.582 #undef SPDK_CONFIG_DPDK_UADK 00:10:52.582 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:52.582 #define SPDK_CONFIG_EXAMPLES 1 00:10:52.582 #undef SPDK_CONFIG_FC 00:10:52.582 #define SPDK_CONFIG_FC_PATH 00:10:52.583 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:52.583 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:52.583 #undef SPDK_CONFIG_FUSE 00:10:52.583 #undef SPDK_CONFIG_FUZZER 00:10:52.583 #define SPDK_CONFIG_FUZZER_LIB 00:10:52.583 #undef SPDK_CONFIG_GOLANG 00:10:52.583 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:52.583 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:52.583 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:52.583 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:10:52.583 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:52.583 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:52.583 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:52.583 #define SPDK_CONFIG_IDXD 1 00:10:52.583 #undef SPDK_CONFIG_IDXD_KERNEL 00:10:52.583 #undef SPDK_CONFIG_IPSEC_MB 00:10:52.583 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:52.583 #define SPDK_CONFIG_ISAL 1 00:10:52.583 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:52.583 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:52.583 #define SPDK_CONFIG_LIBDIR 00:10:52.583 #undef SPDK_CONFIG_LTO 00:10:52.583 #define SPDK_CONFIG_MAX_LCORES 00:10:52.583 #define SPDK_CONFIG_NVME_CUSE 1 00:10:52.583 #undef SPDK_CONFIG_OCF 00:10:52.583 #define SPDK_CONFIG_OCF_PATH 00:10:52.583 #define SPDK_CONFIG_OPENSSL_PATH 00:10:52.583 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:52.583 #define SPDK_CONFIG_PGO_DIR 00:10:52.583 #undef 
SPDK_CONFIG_PGO_USE 00:10:52.583 #define SPDK_CONFIG_PREFIX /usr/local 00:10:52.583 #undef SPDK_CONFIG_RAID5F 00:10:52.583 #undef SPDK_CONFIG_RBD 00:10:52.583 #define SPDK_CONFIG_RDMA 1 00:10:52.583 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:52.583 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:52.583 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:52.583 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:52.583 #define SPDK_CONFIG_SHARED 1 00:10:52.583 #undef SPDK_CONFIG_SMA 00:10:52.583 #define SPDK_CONFIG_TESTS 1 00:10:52.583 #undef SPDK_CONFIG_TSAN 00:10:52.583 #define SPDK_CONFIG_UBLK 1 00:10:52.583 #define SPDK_CONFIG_UBSAN 1 00:10:52.583 #undef SPDK_CONFIG_UNIT_TESTS 00:10:52.583 #undef SPDK_CONFIG_URING 00:10:52.583 #define SPDK_CONFIG_URING_PATH 00:10:52.583 #undef SPDK_CONFIG_URING_ZNS 00:10:52.583 #undef SPDK_CONFIG_USDT 00:10:52.583 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:52.583 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:52.583 #define SPDK_CONFIG_VFIO_USER 1 00:10:52.583 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:52.583 #define SPDK_CONFIG_VHOST 1 00:10:52.583 #define SPDK_CONFIG_VIRTIO 1 00:10:52.583 #undef SPDK_CONFIG_VTUNE 00:10:52.583 #define SPDK_CONFIG_VTUNE_DIR 00:10:52.583 #define SPDK_CONFIG_WERROR 1 00:10:52.583 #define SPDK_CONFIG_WPDK_DIR 00:10:52.583 #undef SPDK_CONFIG_XNVME 00:10:52.583 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:52.583 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:10:52.584 08:22:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:10:52.584 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
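The long run of ': 0' / ': 1' commands, each followed by an 'export SPDK_TEST_*' in the trace above, is consistent with bash default-value expansion: under 'set -x', a statement such as ': "${FLAG:=0}"' is printed post-expansion as ': 0'. A minimal sketch of that pattern, using flag names that appear in this log (inferred from the trace; the actual autotest_common.sh may differ):

  # Give each test flag a default unless the job config (autorun-spdk.conf)
  # already set it, then export it so child scripts and test binaries see it.
  : "${SPDK_RUN_UBSAN:=0}"               # traces as ': 1' in this run, set by the conf
  export SPDK_RUN_UBSAN
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"   # traces as ': tcp' at autotest_common.sh@101
  export SPDK_TEST_NVMF_TRANSPORT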
00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j96 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 162939 ]] 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 162939 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.oLjWxE 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.oLjWxE/tests/target /tmp/spdk.oLjWxE 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=972767232 00:10:52.585 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4311662592 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=190668578816 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=195974311936 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5305733120 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=97983778816 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=97987153920 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=39185494016 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=39194865664 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9371648 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=97986990080 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=97987158016 00:10:52.844 08:22:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=167936 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=19597426688 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=19597430784 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:10:52.844 * Looking for test storage... 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=190668578816 00:10:52.844 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=7520325632 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:52.845 08:22:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.845 
08:22:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:52.845 08:22:39 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:52.845 08:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:58.109 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:58.109 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.109 08:22:44 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:58.109 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:58.110 Found net devices under 0000:86:00.0: cvl_0_0 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:58.110 Found net devices under 0000:86:00.1: cvl_0_1 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:58.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:10:58.110 00:10:58.110 --- 10.0.0.2 ping statistics --- 00:10:58.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.110 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:58.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:10:58.110 00:10:58.110 --- 10.0.0.1 ping statistics --- 00:10:58.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.110 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.110 08:22:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.110 ************************************ 00:10:58.110 START TEST nvmf_filesystem_no_in_capsule 00:10:58.110 ************************************ 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=165960 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 165960 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 165960 ']' 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:58.110 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.368 [2024-05-15 08:22:45.137915] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:10:58.368 [2024-05-15 08:22:45.137960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.368 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.368 [2024-05-15 08:22:45.195996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.368 [2024-05-15 08:22:45.279812] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.368 [2024-05-15 08:22:45.279849] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.368 [2024-05-15 08:22:45.279860] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.368 [2024-05-15 08:22:45.279868] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.368 [2024-05-15 08:22:45.279874] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
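The launch above follows the harness's nvmfappstart pattern: nvmf_tgt is started under ip netns exec so its TCP listener lives on the namespaced interface (cvl_0_0), and waitforlisten blocks until the RPC Unix socket at /var/tmp/spdk.sock answers before any configuration is sent. A minimal stand-alone sketch of the same pattern — binary arguments, namespace name, and socket path are taken from this trace, while the polling loop is an illustrative stand-in for waitforlisten, not its actual implementation:

    # assumes the cvl_0_0_ns_spdk namespace built earlier in this trace
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    tgt_pid=$!
    # poll the RPC socket; spdk_get_version is a cheap liveness probe
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$tgt_pid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done

Only once this loop completes does the harness issue the nvmf_create_transport / bdev / subsystem RPCs seen below.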
00:10:58.368 [2024-05-15 08:22:45.279926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.368 [2024-05-15 08:22:45.280020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.368 [2024-05-15 08:22:45.280103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.368 [2024-05-15 08:22:45.280106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.933 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:58.933 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:10:58.933 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:58.933 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.933 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.192 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.192 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:59.192 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:59.192 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.192 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.192 [2024-05-15 08:22:45.977086] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.192 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.192 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:59.192 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.192 08:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.192 Malloc1 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.192 [2024-05-15 08:22:46.123783] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:59.192 [2024-05-15 08:22:46.124021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:10:59.192 { 00:10:59.192 "name": "Malloc1", 00:10:59.192 "aliases": [ 00:10:59.192 "cc5f7970-b469-4c14-81a5-d12a0cc80c5c" 00:10:59.192 ], 00:10:59.192 "product_name": "Malloc disk", 00:10:59.192 "block_size": 512, 00:10:59.192 "num_blocks": 1048576, 00:10:59.192 "uuid": "cc5f7970-b469-4c14-81a5-d12a0cc80c5c", 00:10:59.192 "assigned_rate_limits": { 00:10:59.192 "rw_ios_per_sec": 0, 00:10:59.192 "rw_mbytes_per_sec": 0, 00:10:59.192 "r_mbytes_per_sec": 0, 00:10:59.192 "w_mbytes_per_sec": 0 00:10:59.192 }, 00:10:59.192 "claimed": true, 00:10:59.192 "claim_type": "exclusive_write", 00:10:59.192 "zoned": false, 00:10:59.192 "supported_io_types": { 00:10:59.192 "read": true, 00:10:59.192 "write": true, 00:10:59.192 "unmap": true, 00:10:59.192 "write_zeroes": true, 00:10:59.192 "flush": true, 00:10:59.192 "reset": true, 00:10:59.192 "compare": false, 00:10:59.192 "compare_and_write": false, 00:10:59.192 "abort": true, 00:10:59.192 "nvme_admin": false, 00:10:59.192 "nvme_io": false 00:10:59.192 }, 00:10:59.192 "memory_domains": [ 00:10:59.192 { 00:10:59.192 "dma_device_id": "system", 00:10:59.192 "dma_device_type": 1 
00:10:59.192 }, 00:10:59.192 { 00:10:59.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.192 "dma_device_type": 2 00:10:59.192 } 00:10:59.192 ], 00:10:59.192 "driver_specific": {} 00:10:59.192 } 00:10:59.192 ]' 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:10:59.192 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:10:59.450 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:10:59.450 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:10:59.450 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:10:59.450 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:59.450 08:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.395 08:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.395 08:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:11:00.395 08:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.395 08:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:00.395 08:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:02.918 08:22:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:02.918 08:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:03.176 08:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:04.548 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.549 ************************************ 00:11:04.549 START TEST filesystem_ext4 00:11:04.549 ************************************ 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:11:04.549 08:22:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:04.549 mke2fs 1.46.5 (30-Dec-2021) 00:11:04.549 Discarding device blocks: 0/522240 done 00:11:04.549 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:04.549 Filesystem UUID: ba721b58-3eec-4705-93c6-089621caecfd 00:11:04.549 Superblock backups stored on blocks: 00:11:04.549 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:04.549 00:11:04.549 Allocating group tables: 0/64 done 00:11:04.549 Writing inode tables: 0/64 done 00:11:04.549 Creating journal (8192 blocks): done 00:11:04.549 Writing superblocks and filesystem accounting information: 0/64 done 00:11:04.549 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:11:04.549 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 165960 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.807 00:11:04.807 real 0m0.486s 00:11:04.807 user 0m0.032s 00:11:04.807 sys 0m0.057s 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:04.807 ************************************ 00:11:04.807 END TEST filesystem_ext4 00:11:04.807 ************************************ 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.807 08:22:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.807 ************************************ 00:11:04.807 START TEST filesystem_btrfs 00:11:04.807 ************************************ 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:11:04.807 08:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:05.372 btrfs-progs v6.6.2 00:11:05.372 See https://btrfs.readthedocs.io for more information. 00:11:05.372 00:11:05.372 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:05.372 NOTE: several default settings have changed in version 5.15, please make sure 00:11:05.372 this does not affect your deployments: 00:11:05.372 - DUP for metadata (-m dup) 00:11:05.372 - enabled no-holes (-O no-holes) 00:11:05.372 - enabled free-space-tree (-R free-space-tree) 00:11:05.372 00:11:05.372 Label: (null) 00:11:05.372 UUID: 0dfccd2f-e9ce-42f3-8901-7e353062d4d4 00:11:05.372 Node size: 16384 00:11:05.372 Sector size: 4096 00:11:05.372 Filesystem size: 510.00MiB 00:11:05.372 Block group profiles: 00:11:05.372 Data: single 8.00MiB 00:11:05.372 Metadata: DUP 32.00MiB 00:11:05.372 System: DUP 8.00MiB 00:11:05.372 SSD detected: yes 00:11:05.372 Zoned device: no 00:11:05.372 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:05.372 Runtime features: free-space-tree 00:11:05.372 Checksum: crc32c 00:11:05.372 Number of devices: 1 00:11:05.372 Devices: 00:11:05.372 ID SIZE PATH 00:11:05.372 1 510.00MiB /dev/nvme0n1p1 00:11:05.372 00:11:05.372 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:11:05.372 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:05.373 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:05.373 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:05.373 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 165960 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:05.631 00:11:05.631 real 0m0.675s 00:11:05.631 user 0m0.027s 00:11:05.631 sys 0m0.173s 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:05.631 ************************************ 00:11:05.631 END TEST filesystem_btrfs 00:11:05.631 ************************************ 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:05.631 08:22:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.631 ************************************ 00:11:05.631 START TEST filesystem_xfs 00:11:05.631 ************************************ 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:11:05.631 08:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:05.631 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:05.631 = sectsz=512 attr=2, projid32bit=1 00:11:05.631 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:05.631 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:05.631 data = bsize=4096 blocks=130560, imaxpct=25 00:11:05.631 = sunit=0 swidth=0 blks 00:11:05.631 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:05.631 log =internal log bsize=4096 blocks=16384, version=2 00:11:05.631 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:05.631 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:06.564 Discarding blocks...Done. 
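Each filesystem case in this trace runs the same body: force-format the exported partition, mount it, exercise a touch/sync/rm cycle, unmount, and confirm with kill -0 that the target process survived the I/O. Note the force flag differs per mkfs flavor, as the make_filesystem helper above shows (-F for ext4, -f for btrfs and xfs). A condensed sketch of that cycle, with the device, mount point, and pid copied from this run and the loop an illustrative rearrangement of the three separate TESTs:

    dev=/dev/nvme0n1p1 mnt=/mnt/device tgt_pid=165960
    for fs in ext4 btrfs xfs; do
        case "$fs" in
            ext4) mkfs.ext4 -F "$dev" ;;      # ext4 spells "force" as -F
            *)    "mkfs.$fs" -f "$dev" ;;     # btrfs and xfs use -f
        esac
        mount "$dev" "$mnt"
        touch "$mnt/aaa" && sync && rm "$mnt/aaa" && sync
        umount "$mnt"
        kill -0 "$tgt_pid"                    # target must still be alive
    done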
00:11:06.564 08:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:11:06.564 08:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.090 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.090 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:09.090 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.090 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 165960 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.091 00:11:09.091 real 0m3.340s 00:11:09.091 user 0m0.024s 00:11:09.091 sys 0m0.105s 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:09.091 ************************************ 00:11:09.091 END TEST filesystem_xfs 00:11:09.091 ************************************ 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:09.091 08:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:09.091 
08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 165960 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 165960 ']' 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 165960 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:09.091 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 165960 00:11:09.349 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:09.349 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:09.349 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 165960' 00:11:09.349 killing process with pid 165960 00:11:09.349 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 165960 00:11:09.349 [2024-05-15 08:22:56.141683] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:09.349 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 165960 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:09.608 00:11:09.608 real 0m11.436s 00:11:09.608 user 0m44.758s 00:11:09.608 sys 0m1.275s 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.608 ************************************ 00:11:09.608 END TEST nvmf_filesystem_no_in_capsule 00:11:09.608 ************************************ 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 
-le 1 ']' 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:09.608 ************************************ 00:11:09.608 START TEST nvmf_filesystem_in_capsule 00:11:09.608 ************************************ 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=168250 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 168250 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 168250 ']' 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:09.608 08:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.866 [2024-05-15 08:22:56.640367] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:11:09.866 [2024-05-15 08:22:56.640407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.866 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.867 [2024-05-15 08:22:56.695675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.867 [2024-05-15 08:22:56.767855] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.867 [2024-05-15 08:22:56.767894] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
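The only functional difference from the no_in_capsule pass is the transport's in-capsule data size: the first run created the TCP transport with -c 0, while this run uses -c 4096, letting the initiator carry up to 4096 bytes of write payload inside the NVMe/TCP command capsule instead of fetching it in a separate data transfer. The two forms, reproduced from this trace (rpc_cmd is the harness wrapper around scripts/rpc.py; -u is the I/O unit size, -c the in-capsule data size — the -c 0 call appears in the earlier pass above, the -c 4096 call follows just below):

    # no_in_capsule pass:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # in_capsule pass: writes up to 4096 bytes ride in the command capsule
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096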
00:11:09.867 [2024-05-15 08:22:56.767903] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.867 [2024-05-15 08:22:56.767911] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.867 [2024-05-15 08:22:56.767916] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.867 [2024-05-15 08:22:56.767968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.867 [2024-05-15 08:22:56.768066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.867 [2024-05-15 08:22:56.768153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.867 [2024-05-15 08:22:56.768155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.432 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:10.432 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:11:10.432 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:10.432 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.432 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.690 [2024-05-15 08:22:57.488212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.690 Malloc1 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.690 08:22:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.690 [2024-05-15 08:22:57.634643] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:10.690 [2024-05-15 08:22:57.634894] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.690 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:11:10.690 { 00:11:10.690 "name": "Malloc1", 00:11:10.690 "aliases": [ 00:11:10.690 "77ba4493-d3ee-4c33-9440-7476acdaa3fc" 00:11:10.690 ], 00:11:10.690 "product_name": "Malloc disk", 00:11:10.690 "block_size": 512, 00:11:10.690 "num_blocks": 1048576, 00:11:10.690 "uuid": "77ba4493-d3ee-4c33-9440-7476acdaa3fc", 00:11:10.690 "assigned_rate_limits": { 00:11:10.690 "rw_ios_per_sec": 0, 00:11:10.690 "rw_mbytes_per_sec": 0, 00:11:10.690 "r_mbytes_per_sec": 0, 00:11:10.690 "w_mbytes_per_sec": 0 00:11:10.690 }, 00:11:10.690 "claimed": true, 00:11:10.690 "claim_type": "exclusive_write", 00:11:10.690 "zoned": false, 00:11:10.690 "supported_io_types": { 00:11:10.690 "read": true, 00:11:10.690 "write": true, 00:11:10.690 "unmap": true, 00:11:10.690 "write_zeroes": true, 00:11:10.690 "flush": true, 00:11:10.690 "reset": true, 
00:11:10.690 "compare": false, 00:11:10.690 "compare_and_write": false, 00:11:10.690 "abort": true, 00:11:10.690 "nvme_admin": false, 00:11:10.691 "nvme_io": false 00:11:10.691 }, 00:11:10.691 "memory_domains": [ 00:11:10.691 { 00:11:10.691 "dma_device_id": "system", 00:11:10.691 "dma_device_type": 1 00:11:10.691 }, 00:11:10.691 { 00:11:10.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.691 "dma_device_type": 2 00:11:10.691 } 00:11:10.691 ], 00:11:10.691 "driver_specific": {} 00:11:10.691 } 00:11:10.691 ]' 00:11:10.691 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:11:10.691 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:11:10.691 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:11:10.949 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:11:10.949 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:11:10.949 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:11:10.949 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:10.949 08:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.323 08:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.323 08:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:11:12.323 08:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.323 08:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:12.323 08:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:14.222 08:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:14.222 08:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:14.787 08:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.721 ************************************ 00:11:15.721 START TEST filesystem_in_capsule_ext4 00:11:15.721 ************************************ 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:11:15.721 08:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:15.721 mke2fs 1.46.5 (30-Dec-2021) 00:11:15.721 Discarding device blocks: 0/522240 done 00:11:15.721 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:15.721 Filesystem UUID: cea83eba-fcd8-4280-a9bd-737c60bba964 00:11:15.721 Superblock backups stored on blocks: 00:11:15.721 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:15.721 00:11:15.721 Allocating group tables: 0/64 done 00:11:15.721 Writing inode tables: 0/64 done 00:11:18.247 Creating journal (8192 blocks): done 00:11:19.181 Writing superblocks and filesystem accounting information: 0/64 done 00:11:19.181 00:11:19.181 08:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:11:19.181 08:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:19.748 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 168250 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.006 00:11:20.006 real 0m4.290s 00:11:20.006 user 0m0.030s 00:11:20.006 sys 0m0.063s 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 ************************************ 00:11:20.006 END TEST filesystem_in_capsule_ext4 00:11:20.006 ************************************ 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 ************************************ 00:11:20.006 START TEST filesystem_in_capsule_btrfs 00:11:20.006 ************************************ 00:11:20.006 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:20.007 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:20.007 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.007 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:20.007 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:11:20.007 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:11:20.007 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:11:20.007 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:11:20.007 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:11:20.007 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:11:20.007 08:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:20.265 btrfs-progs v6.6.2 00:11:20.265 See https://btrfs.readthedocs.io for more information. 00:11:20.265 00:11:20.265 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:20.265 NOTE: several default settings have changed in version 5.15, please make sure 00:11:20.265 this does not affect your deployments: 00:11:20.265 - DUP for metadata (-m dup) 00:11:20.265 - enabled no-holes (-O no-holes) 00:11:20.265 - enabled free-space-tree (-R free-space-tree) 00:11:20.265 00:11:20.265 Label: (null) 00:11:20.265 UUID: b723940c-7204-4709-82e2-24c25de798f1 00:11:20.265 Node size: 16384 00:11:20.265 Sector size: 4096 00:11:20.265 Filesystem size: 510.00MiB 00:11:20.265 Block group profiles: 00:11:20.265 Data: single 8.00MiB 00:11:20.265 Metadata: DUP 32.00MiB 00:11:20.265 System: DUP 8.00MiB 00:11:20.265 SSD detected: yes 00:11:20.265 Zoned device: no 00:11:20.265 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:20.265 Runtime features: free-space-tree 00:11:20.265 Checksum: crc32c 00:11:20.265 Number of devices: 1 00:11:20.265 Devices: 00:11:20.265 ID SIZE PATH 00:11:20.265 1 510.00MiB /dev/nvme0n1p1 00:11:20.265 00:11:20.265 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:11:20.265 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.831 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.831 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:20.831 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.831 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:20.831 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:20.831 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 168250 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.090 00:11:21.090 real 0m0.957s 00:11:21.090 user 0m0.022s 00:11:21.090 sys 0m0.132s 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:21.090 ************************************ 00:11:21.090 END TEST filesystem_in_capsule_btrfs 00:11:21.090 ************************************ 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.090 ************************************ 00:11:21.090 START TEST filesystem_in_capsule_xfs 00:11:21.090 ************************************ 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:11:21.090 08:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:21.090 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:21.090 = sectsz=512 attr=2, projid32bit=1 00:11:21.090 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:21.090 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:21.090 data = bsize=4096 blocks=130560, imaxpct=25 00:11:21.090 = sunit=0 swidth=0 blks 00:11:21.090 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:21.090 log =internal log bsize=4096 blocks=16384, version=2 00:11:21.090 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:21.090 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:22.031 Discarding blocks...Done. 
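The three filesystem_in_capsule tests (ext4 and btrfs above, plus the xfs run completing below) all exercise the same make_filesystem flow from target/filesystem.sh: partition the exported namespace, build a filesystem, mount it, smoke-test a write, and unmount. A minimal standalone sketch of that flow, assuming the connected namespace shows up as /dev/nvme0n1 as it does in this run:

  # partition the exported namespace and build a filesystem on it
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe                              # re-read the partition table
  mkfs.xfs -f /dev/nvme0n1p1             # or: mkfs.ext4 -F, mkfs.btrfs -f
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                  # smoke-test a write over NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
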
00:11:22.031 08:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:11:22.031 08:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 168250 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:24.567 00:11:24.567 real 0m3.163s 00:11:24.567 user 0m0.024s 00:11:24.567 sys 0m0.072s 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:24.567 ************************************ 00:11:24.567 END TEST filesystem_in_capsule_xfs 00:11:24.567 ************************************ 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.567 08:23:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 168250 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 168250 ']' 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 168250 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:24.567 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 168250 00:11:24.827 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:24.827 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:24.827 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 168250' 00:11:24.827 killing process with pid 168250 00:11:24.827 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 168250 00:11:24.827 [2024-05-15 08:23:11.611112] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:24.827 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 168250 00:11:25.087 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:25.087 00:11:25.087 real 0m15.393s 00:11:25.087 user 1m0.567s 00:11:25.087 sys 0m1.263s 00:11:25.087 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:25.087 08:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.087 ************************************ 00:11:25.088 END TEST nvmf_filesystem_in_capsule 00:11:25.088 ************************************ 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.088 rmmod nvme_tcp 00:11:25.088 rmmod nvme_fabrics 00:11:25.088 rmmod nvme_keyring 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.088 08:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.629 08:23:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.629 00:11:27.629 real 0m34.745s 00:11:27.629 user 1m47.080s 00:11:27.629 sys 0m6.687s 00:11:27.630 08:23:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.630 08:23:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.630 ************************************ 00:11:27.630 END TEST nvmf_filesystem 00:11:27.630 ************************************ 00:11:27.630 08:23:14 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:27.630 08:23:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:27.630 08:23:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:27.630 08:23:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.630 ************************************ 00:11:27.630 START TEST nvmf_target_discovery 00:11:27.630 ************************************ 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:27.630 * Looking for test storage... 
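The nvmf_target_discovery run starting here stands up a TCP target with null-bdev subsystems plus a discovery referral before invoking nvme discover; the rpc_cmd trace further down shows each call. A condensed single-subsystem sketch of that setup, issued through SPDK's scripts/rpc.py (which the suite's rpc_cmd wrapper drives; the rpc.py path and the reduction to one subsystem are illustrative):

  # target side: transport, one null bdev, one subsystem with a TCP listener
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_null_create Null1 102400 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # expose the discovery service itself and add a referral on port 4430
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # initiator side: expect one record per subsystem plus the referral
  nvme discover -t tcp -a 10.0.0.2 -s 4420
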
00:11:27.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:27.630 08:23:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.906 08:23:19 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:32.906 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:32.906 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:32.906 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:32.907 Found net devices under 0000:86:00.0: cvl_0_0 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:32.907 Found net devices under 0000:86:00.1: cvl_0_1 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:32.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:11:32.907 00:11:32.907 --- 10.0.0.2 ping statistics --- 00:11:32.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.907 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:32.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:11:32.907 00:11:32.907 --- 10.0.0.1 ping statistics --- 00:11:32.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.907 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=174310 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 174310 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 174310 ']' 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:11:32.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:32.907 08:23:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.907 [2024-05-15 08:23:19.605900] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:11:32.907 [2024-05-15 08:23:19.605942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.907 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.907 [2024-05-15 08:23:19.661549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.907 [2024-05-15 08:23:19.741709] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.907 [2024-05-15 08:23:19.741742] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.907 [2024-05-15 08:23:19.741751] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.907 [2024-05-15 08:23:19.741759] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.907 [2024-05-15 08:23:19.741765] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.907 [2024-05-15 08:23:19.741810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.907 [2024-05-15 08:23:19.741903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.907 [2024-05-15 08:23:19.741975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.907 [2024-05-15 08:23:19.741978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.474 [2024-05-15 08:23:20.464313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:33.474 08:23:20 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.474 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.732 Null1 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.732 [2024-05-15 08:23:20.521639] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:33.732 [2024-05-15 08:23:20.521849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.732 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 Null2 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 Null3 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 Null4 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.733 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:33.991 00:11:33.991 Discovery Log Number of Records 6, Generation counter 6 00:11:33.991 =====Discovery Log Entry 0====== 00:11:33.991 trtype: tcp 00:11:33.991 adrfam: ipv4 00:11:33.991 subtype: current discovery subsystem 00:11:33.991 treq: not required 00:11:33.991 portid: 0 00:11:33.991 trsvcid: 4420 00:11:33.991 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:33.991 traddr: 10.0.0.2 00:11:33.991 eflags: explicit discovery connections, duplicate discovery information 00:11:33.991 sectype: none 00:11:33.991 =====Discovery Log Entry 1====== 00:11:33.991 trtype: tcp 00:11:33.991 adrfam: ipv4 00:11:33.991 subtype: nvme subsystem 00:11:33.991 treq: not required 00:11:33.991 portid: 0 00:11:33.991 trsvcid: 4420 00:11:33.991 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:33.991 traddr: 10.0.0.2 00:11:33.991 eflags: none 00:11:33.991 sectype: none 00:11:33.991 =====Discovery Log Entry 2====== 00:11:33.992 trtype: tcp 00:11:33.992 adrfam: ipv4 00:11:33.992 subtype: nvme subsystem 00:11:33.992 treq: not required 00:11:33.992 portid: 0 00:11:33.992 trsvcid: 4420 00:11:33.992 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:33.992 traddr: 10.0.0.2 00:11:33.992 eflags: none 00:11:33.992 sectype: none 00:11:33.992 =====Discovery Log Entry 3====== 00:11:33.992 trtype: tcp 00:11:33.992 adrfam: ipv4 00:11:33.992 subtype: nvme subsystem 00:11:33.992 treq: not required 00:11:33.992 portid: 0 00:11:33.992 trsvcid: 4420 00:11:33.992 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:33.992 traddr: 10.0.0.2 
00:11:33.992 eflags: none 00:11:33.992 sectype: none 00:11:33.992 =====Discovery Log Entry 4====== 00:11:33.992 trtype: tcp 00:11:33.992 adrfam: ipv4 00:11:33.992 subtype: nvme subsystem 00:11:33.992 treq: not required 00:11:33.992 portid: 0 00:11:33.992 trsvcid: 4420 00:11:33.992 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:33.992 traddr: 10.0.0.2 00:11:33.992 eflags: none 00:11:33.992 sectype: none 00:11:33.992 =====Discovery Log Entry 5====== 00:11:33.992 trtype: tcp 00:11:33.992 adrfam: ipv4 00:11:33.992 subtype: discovery subsystem referral 00:11:33.992 treq: not required 00:11:33.992 portid: 0 00:11:33.992 trsvcid: 4430 00:11:33.992 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:33.992 traddr: 10.0.0.2 00:11:33.992 eflags: none 00:11:33.992 sectype: none 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:33.992 Perform nvmf subsystem discovery via RPC 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 [ 00:11:33.992 { 00:11:33.992 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:33.992 "subtype": "Discovery", 00:11:33.992 "listen_addresses": [ 00:11:33.992 { 00:11:33.992 "trtype": "TCP", 00:11:33.992 "adrfam": "IPv4", 00:11:33.992 "traddr": "10.0.0.2", 00:11:33.992 "trsvcid": "4420" 00:11:33.992 } 00:11:33.992 ], 00:11:33.992 "allow_any_host": true, 00:11:33.992 "hosts": [] 00:11:33.992 }, 00:11:33.992 { 00:11:33.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:33.992 "subtype": "NVMe", 00:11:33.992 "listen_addresses": [ 00:11:33.992 { 00:11:33.992 "trtype": "TCP", 00:11:33.992 "adrfam": "IPv4", 00:11:33.992 "traddr": "10.0.0.2", 00:11:33.992 "trsvcid": "4420" 00:11:33.992 } 00:11:33.992 ], 00:11:33.992 "allow_any_host": true, 00:11:33.992 "hosts": [], 00:11:33.992 "serial_number": "SPDK00000000000001", 00:11:33.992 "model_number": "SPDK bdev Controller", 00:11:33.992 "max_namespaces": 32, 00:11:33.992 "min_cntlid": 1, 00:11:33.992 "max_cntlid": 65519, 00:11:33.992 "namespaces": [ 00:11:33.992 { 00:11:33.992 "nsid": 1, 00:11:33.992 "bdev_name": "Null1", 00:11:33.992 "name": "Null1", 00:11:33.992 "nguid": "5DB16843ABA947AF8E91764DAA2A2D7C", 00:11:33.992 "uuid": "5db16843-aba9-47af-8e91-764daa2a2d7c" 00:11:33.992 } 00:11:33.992 ] 00:11:33.992 }, 00:11:33.992 { 00:11:33.992 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:33.992 "subtype": "NVMe", 00:11:33.992 "listen_addresses": [ 00:11:33.992 { 00:11:33.992 "trtype": "TCP", 00:11:33.992 "adrfam": "IPv4", 00:11:33.992 "traddr": "10.0.0.2", 00:11:33.992 "trsvcid": "4420" 00:11:33.992 } 00:11:33.992 ], 00:11:33.992 "allow_any_host": true, 00:11:33.992 "hosts": [], 00:11:33.992 "serial_number": "SPDK00000000000002", 00:11:33.992 "model_number": "SPDK bdev Controller", 00:11:33.992 "max_namespaces": 32, 00:11:33.992 "min_cntlid": 1, 00:11:33.992 "max_cntlid": 65519, 00:11:33.992 "namespaces": [ 00:11:33.992 { 00:11:33.992 "nsid": 1, 00:11:33.992 "bdev_name": "Null2", 00:11:33.992 "name": "Null2", 00:11:33.992 "nguid": "A449AB1E73464EB692340E2AA5E1643F", 00:11:33.992 "uuid": "a449ab1e-7346-4eb6-9234-0e2aa5e1643f" 00:11:33.992 } 00:11:33.992 ] 00:11:33.992 }, 00:11:33.992 { 00:11:33.992 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:33.992 "subtype": "NVMe", 00:11:33.992 "listen_addresses": [ 
00:11:33.992 { 00:11:33.992 "trtype": "TCP", 00:11:33.992 "adrfam": "IPv4", 00:11:33.992 "traddr": "10.0.0.2", 00:11:33.992 "trsvcid": "4420" 00:11:33.992 } 00:11:33.992 ], 00:11:33.992 "allow_any_host": true, 00:11:33.992 "hosts": [], 00:11:33.992 "serial_number": "SPDK00000000000003", 00:11:33.992 "model_number": "SPDK bdev Controller", 00:11:33.992 "max_namespaces": 32, 00:11:33.992 "min_cntlid": 1, 00:11:33.992 "max_cntlid": 65519, 00:11:33.992 "namespaces": [ 00:11:33.992 { 00:11:33.992 "nsid": 1, 00:11:33.992 "bdev_name": "Null3", 00:11:33.992 "name": "Null3", 00:11:33.992 "nguid": "7C6549EA89664358A562B1706A841C94", 00:11:33.992 "uuid": "7c6549ea-8966-4358-a562-b1706a841c94" 00:11:33.992 } 00:11:33.992 ] 00:11:33.992 }, 00:11:33.992 { 00:11:33.992 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:33.992 "subtype": "NVMe", 00:11:33.992 "listen_addresses": [ 00:11:33.992 { 00:11:33.992 "trtype": "TCP", 00:11:33.992 "adrfam": "IPv4", 00:11:33.992 "traddr": "10.0.0.2", 00:11:33.992 "trsvcid": "4420" 00:11:33.992 } 00:11:33.992 ], 00:11:33.992 "allow_any_host": true, 00:11:33.992 "hosts": [], 00:11:33.992 "serial_number": "SPDK00000000000004", 00:11:33.992 "model_number": "SPDK bdev Controller", 00:11:33.992 "max_namespaces": 32, 00:11:33.992 "min_cntlid": 1, 00:11:33.992 "max_cntlid": 65519, 00:11:33.992 "namespaces": [ 00:11:33.992 { 00:11:33.992 "nsid": 1, 00:11:33.992 "bdev_name": "Null4", 00:11:33.992 "name": "Null4", 00:11:33.992 "nguid": "9DBD8516A991488694D6709C165B4EC3", 00:11:33.992 "uuid": "9dbd8516-a991-4886-94d6-709c165b4ec3" 00:11:33.992 } 00:11:33.992 ] 00:11:33.992 } 00:11:33.992 ] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:33.992 
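For reference, the create/verify/teardown cycle traced above reduces to a short standalone script. The sketch below targets SPDK's rpc.py directly instead of the harness's rpc_cmd wrapper; the scripts/rpc.py path is an assumption about the checkout layout, while the command names and arguments are exactly those traced in the log:

    # create four null bdevs and expose each through its own TCP subsystem
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create Null$i 102400 512
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # dump the subsystem state, then tear everything down again
    scripts/rpc.py nvmf_get_subsystems
    for i in 1 2 3 4; do
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        scripts/rpc.py bdev_null_delete Null$i
    done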
08:23:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.992 08:23:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:33.992 rmmod nvme_tcp 00:11:34.251 rmmod nvme_fabrics 00:11:34.251 rmmod nvme_keyring 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 174310 ']' 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 174310 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 174310 ']' 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 174310 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 174310 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 174310' 00:11:34.251 killing process with pid 174310 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 174310 00:11:34.251 [2024-05-15 08:23:21.093064] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:34.251 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 174310 00:11:34.509 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.509 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.509 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.509 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.509 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.509 08:23:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.509 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.509 08:23:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.410 08:23:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:36.410 00:11:36.410 real 0m9.158s 00:11:36.410 user 0m7.839s 
00:11:36.410 sys 0m4.254s 00:11:36.410 08:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:36.410 08:23:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.410 ************************************ 00:11:36.410 END TEST nvmf_target_discovery 00:11:36.410 ************************************ 00:11:36.410 08:23:23 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:36.410 08:23:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:36.410 08:23:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:36.410 08:23:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:36.410 ************************************ 00:11:36.410 START TEST nvmf_referrals 00:11:36.410 ************************************ 00:11:36.410 08:23:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:36.670 * Looking for test storage... 00:11:36.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.670 08:23:23 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:36.670 08:23:23 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:36.670 08:23:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:41.936 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:41.937 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:41.937 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:41.937 Found net devices under 0000:86:00.0: cvl_0_0 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:41.937 Found net devices under 0000:86:00.1: cvl_0_1 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:41.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:11:41.937 00:11:41.937 --- 10.0.0.2 ping statistics --- 00:11:41.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.937 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:11:41.937 00:11:41.937 --- 10.0.0.1 ping statistics --- 00:11:41.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.937 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=177982 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 177982 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 177982 ']' 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
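The network setup traced above is easier to read condensed. A minimal sketch of what nvmf_tcp_init and nvmfappstart just did, with the interface names (cvl_0_0/cvl_0_1) and the nvmf_tgt invocation taken from this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target-side e810 port
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # start the target inside the namespace (build/bin path relative to the
    # spdk checkout) and poll until /var/tmp/spdk.sock answers:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Each side then pings the other (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) before the test proper starts; the RPC socket is a Unix socket, so rpc.py can reach the target without entering the namespace.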
00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.937 08:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.937 [2024-05-15 08:23:28.353410] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:11:41.937 [2024-05-15 08:23:28.353453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.937 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.937 [2024-05-15 08:23:28.407736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.937 [2024-05-15 08:23:28.489717] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.937 [2024-05-15 08:23:28.489750] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.937 [2024-05-15 08:23:28.489760] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.937 [2024-05-15 08:23:28.489767] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.937 [2024-05-15 08:23:28.489773] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.937 [2024-05-15 08:23:28.489825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.937 [2024-05-15 08:23:28.489860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.937 [2024-05-15 08:23:28.489842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.937 [2024-05-15 08:23:28.489863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.195 [2024-05-15 08:23:29.202168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.195 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.195 [2024-05-15 08:23:29.215357] 
nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:42.195 [2024-05-15 08:23:29.215559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:42.454 
08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:42.454 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:42.712 08:23:29 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:42.712 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:42.970 08:23:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:43.227 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 
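The get_referral_ips and get_discovery_entries helpers above both reduce to filtering nvme discover's JSON output with jq. A standalone equivalent, reusing the host NQN/ID generated for this run (the jq filters are the ones traced in the log):

    # addresses of everything except the current discovery subsystem
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # the RPC-side view the test compares it against
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort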
00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.485 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.742 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:44.001 08:23:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:44.002 rmmod nvme_tcp 00:11:44.002 rmmod nvme_fabrics 00:11:44.002 rmmod nvme_keyring 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 177982 ']' 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 177982 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 177982 ']' 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 177982 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:44.002 08:23:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 177982 00:11:44.002 08:23:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 177982' 00:11:44.261 killing process with pid 177982 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 177982 00:11:44.261 [2024-05-15 08:23:31.025683] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 177982 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
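Taken together, the referral operations this test exercised reduce to a handful of RPCs; a standalone sketch (scripts/rpc.py path assumed as before, flags as traced above):

    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    # a referral may also name a specific subsystem, or explicitly the
    # discovery NQN, via -n:
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_discovery_get_referrals
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2014-08.org.nvmexpress.discovery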
00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.261 08:23:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.792 08:23:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:46.792 00:11:46.792 real 0m9.872s 00:11:46.792 user 0m12.854s 00:11:46.792 sys 0m4.214s 00:11:46.792 08:23:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:46.792 08:23:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.792 ************************************ 00:11:46.792 END TEST nvmf_referrals 00:11:46.792 ************************************ 00:11:46.792 08:23:33 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:46.792 08:23:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:46.792 08:23:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:46.792 08:23:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:46.792 ************************************ 00:11:46.792 START TEST nvmf_connect_disconnect 00:11:46.792 ************************************ 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:46.792 * Looking for test storage... 00:11:46.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.792 08:23:33 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.792 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
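
Worth noting in the common.sh setup traced above: the harness generates one NVMe-oF host identity per run and threads it through every initiator-side command via the NVME_HOST array. A sketch of that idiom; the exact derivation of the host ID from the NQN is an assumption here, not taken from the script:

    # One NVMe-oF host identity for the whole run.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: reuse the UUID part as host ID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Every discover/connect then presents the same identity, e.g.:
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json
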
00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:46.793 08:23:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:52.069 
08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:52.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:52.069 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:52.069 Found net devices under 0000:86:00.0: cvl_0_0 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:52.069 Found net devices under 0000:86:00.1: cvl_0_1 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.069 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:52.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:11:52.070 00:11:52.070 --- 10.0.0.2 ping statistics --- 00:11:52.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.070 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:11:52.070 00:11:52.070 --- 10.0.0.1 ping statistics --- 00:11:52.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.070 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.070 08:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=181940 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 181940 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 181940 ']' 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:52.070 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.070 [2024-05-15 08:23:39.083146] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
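
The single-host topology built in the trace above is worth spelling out: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace to serve as the target's stack, while the second port (cvl_0_1) stays in the root namespace as the initiator. Condensed from the commands in the trace:

    # Target port cvl_0_0 gets its own namespace; cvl_0_1 stays as initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as the trace shows) and configured over rpc_cmd, so its TCP listener on 10.0.0.2:4420 is reachable only through the NIC, just as a remote target would be.
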
00:11:52.070 [2024-05-15 08:23:39.083192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.330 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.330 [2024-05-15 08:23:39.140270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.330 [2024-05-15 08:23:39.214146] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.330 [2024-05-15 08:23:39.214187] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.330 [2024-05-15 08:23:39.214196] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.330 [2024-05-15 08:23:39.214203] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.330 [2024-05-15 08:23:39.214210] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.330 [2024-05-15 08:23:39.214984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.330 [2024-05-15 08:23:39.215002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.330 [2024-05-15 08:23:39.215108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.330 [2024-05-15 08:23:39.215112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.898 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:52.898 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:11:52.899 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:52.899 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.899 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.157 [2024-05-15 08:23:39.928156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.157 08:23:39 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.157 [2024-05-15 08:23:39.979863] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:53.157 [2024-05-15 08:23:39.980115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:53.157 08:23:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:56.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.580 rmmod nvme_tcp 00:12:09.580 rmmod nvme_fabrics 00:12:09.580 rmmod nvme_keyring 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:09.580 08:23:56 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 181940 ']' 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 181940 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 181940 ']' 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 181940 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 181940 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 181940' 00:12:09.580 killing process with pid 181940 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 181940 00:12:09.580 [2024-05-15 08:23:56.253748] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 181940 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.580 08:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.114 08:23:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:12.114 00:12:12.114 real 0m25.177s 00:12:12.114 user 1m10.114s 00:12:12.114 sys 0m5.347s 00:12:12.114 08:23:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:12.114 08:23:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 ************************************ 00:12:12.114 END TEST nvmf_connect_disconnect 00:12:12.114 ************************************ 00:12:12.114 08:23:58 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:12.114 08:23:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:12.114 08:23:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:12.114 08:23:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 ************************************ 00:12:12.114 START TEST nvmf_multitarget 
00:12:12.114 ************************************ 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:12.115 * Looking for test storage... 00:12:12.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
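
As with the earlier tests, nvmftestinit begins by clearing whatever the previous run left behind. The trace only shows the call sites, so the following is an assumed sketch of what _remove_spdk_ns amounts to; the 14> redirection matches the harness convention, which appears to route xtrace output through file descriptor 14 so it can be silenced per command:

    # Assumed shape of the inter-test cleanup hook seen in the trace.
    _remove_spdk_ns() {
        ip netns del cvl_0_0_ns_spdk 2>/dev/null || true   # hypothetical body
    }
    eval '_remove_spdk_ns 14> /dev/null'   # fd 14: presumed xtrace descriptor
    ip -4 addr flush cvl_0_1               # drop stale initiator-side addresses
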
00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:12.115 08:23:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:17.387 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:17.388 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:17.388 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:17.388 Found net devices under 0000:86:00.0: cvl_0_0 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
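
The scan traced here resolves each supported PCI function to its kernel net device by globbing sysfs, which is how the harness discovers cvl_0_0 and cvl_0_1. Reduced to a runnable sketch, with the PCI addresses taken from this run:

    # Map PCI functions to net devices, as in the trace (E810, 0x8086:0x159b).
    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")           # keep just the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
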
00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:17.388 Found net devices under 0000:86:00.1: cvl_0_1 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:17.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:17.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:12:17.388 00:12:17.388 --- 10.0.0.2 ping statistics --- 00:12:17.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.388 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:12:17.388 00:12:17.388 --- 10.0.0.1 ping statistics --- 00:12:17.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.388 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:17.388 08:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=188460 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 188460 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 188460 ']' 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:17.388 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:17.388 [2024-05-15 08:24:04.072386] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
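
Once the app is up, the actual multitarget exercise that follows in the trace is short: assert that only the default target exists, create two named targets, then delete them and assert the count drops back. Condensed into a sketch, assuming the stock helper script from the SPDK tree:

    rpc=test/nvmf/target/multitarget_rpc.py
    [[ $($rpc nvmf_get_targets | jq length) -eq 1 ]]   # default target only
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [[ $($rpc nvmf_get_targets | jq length) -eq 3 ]]
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [[ $($rpc nvmf_get_targets | jq length) -eq 1 ]]
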
00:12:17.388 [2024-05-15 08:24:04.072424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.388 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.388 [2024-05-15 08:24:04.130795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.388 [2024-05-15 08:24:04.211081] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.388 [2024-05-15 08:24:04.211119] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.388 [2024-05-15 08:24:04.211133] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.388 [2024-05-15 08:24:04.211139] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.388 [2024-05-15 08:24:04.211145] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.388 [2024-05-15 08:24:04.211213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.389 [2024-05-15 08:24:04.211251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.389 [2024-05-15 08:24:04.211253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.389 [2024-05-15 08:24:04.211231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.955 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:17.955 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:17.955 08:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.955 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.955 08:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:17.955 08:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.955 08:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:17.955 08:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:17.955 08:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:18.212 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:18.212 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:18.212 "nvmf_tgt_1" 00:12:18.212 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:18.212 "nvmf_tgt_2" 00:12:18.470 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:18.470 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:18.470 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:18.470 
08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:18.470 true 00:12:18.470 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:18.727 true 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.727 rmmod nvme_tcp 00:12:18.727 rmmod nvme_fabrics 00:12:18.727 rmmod nvme_keyring 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:18.727 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 188460 ']' 00:12:18.728 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 188460 00:12:18.728 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 188460 ']' 00:12:18.728 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 188460 00:12:18.728 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:18.728 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:18.728 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 188460 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 188460' 00:12:18.986 killing process with pid 188460 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 188460 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 188460 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:18.986 08:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.987 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.987 08:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.519 08:24:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:21.519 00:12:21.519 real 0m9.435s 00:12:21.519 user 0m9.271s 00:12:21.519 sys 0m4.414s 00:12:21.519 08:24:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:21.519 08:24:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:21.519 ************************************ 00:12:21.519 END TEST nvmf_multitarget 00:12:21.519 ************************************ 00:12:21.519 08:24:08 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:21.519 08:24:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:21.519 08:24:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:21.519 08:24:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.519 ************************************ 00:12:21.519 START TEST nvmf_rpc 00:12:21.519 ************************************ 00:12:21.519 08:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:21.519 * Looking for test storage... 00:12:21.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.519 08:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.519 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:21.519 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.519 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.519 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.519 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.520 08:24:08 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.520 
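The host identity used by every nvme connect later in this test is generated once while nvmf/common.sh is sourced (common.sh@17-19 above). A minimal restatement; the trace only shows the resulting values, so the parameter expansion used to derive the host ID is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # here: nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # assumed extraction of the bare UUID suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")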
08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.520 08:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.788 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.788 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.788 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.788 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.788 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.788 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.788 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.788 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.788 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:26.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:26.789 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:26.789 Found net devices under 0000:86:00.0: cvl_0_0 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.789 
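The device scan traced above (common.sh@382-401) is a plain sysfs walk: for each supported PCI function, list the net interfaces bound to it and record their names. A reduced sketch with the link-state and unbound-driver checks omitted:

    for pci in "${pci_devs[@]}"; do                       # e.g. 0000:86:00.0 and 0000:86:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # interfaces bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep names like cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done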
08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:26.789 Found net devices under 0000:86:00.1: cvl_0_1 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:12:26.789 00:12:26.789 --- 10.0.0.2 ping statistics --- 00:12:26.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.789 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:12:26.789 00:12:26.789 --- 10.0.0.1 ping statistics --- 00:12:26.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.789 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=192631 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 192631 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 192631 ']' 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.789 08:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.789 [2024-05-15 08:24:13.667062] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:12:26.789 [2024-05-15 08:24:13.667101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.789 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.789 [2024-05-15 08:24:13.723480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.789 [2024-05-15 08:24:13.804099] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.789 [2024-05-15 08:24:13.804133] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.789 [2024-05-15 08:24:13.804143] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.789 [2024-05-15 08:24:13.804149] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.789 [2024-05-15 08:24:13.804154] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.789 [2024-05-15 08:24:13.804199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.789 [2024-05-15 08:24:13.804224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.790 [2024-05-15 08:24:13.804313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.790 [2024-05-15 08:24:13.804314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:27.727 "tick_rate": 2300000000, 00:12:27.727 "poll_groups": [ 00:12:27.727 { 00:12:27.727 "name": "nvmf_tgt_poll_group_000", 00:12:27.727 "admin_qpairs": 0, 00:12:27.727 "io_qpairs": 0, 00:12:27.727 "current_admin_qpairs": 0, 00:12:27.727 "current_io_qpairs": 0, 00:12:27.727 "pending_bdev_io": 0, 00:12:27.727 "completed_nvme_io": 0, 00:12:27.727 "transports": [] 00:12:27.727 }, 00:12:27.727 { 00:12:27.727 "name": "nvmf_tgt_poll_group_001", 00:12:27.727 "admin_qpairs": 0, 00:12:27.727 "io_qpairs": 0, 00:12:27.727 "current_admin_qpairs": 0, 00:12:27.727 "current_io_qpairs": 0, 00:12:27.727 "pending_bdev_io": 0, 00:12:27.727 "completed_nvme_io": 0, 00:12:27.727 "transports": [] 00:12:27.727 }, 00:12:27.727 { 00:12:27.727 "name": "nvmf_tgt_poll_group_002", 00:12:27.727 "admin_qpairs": 0, 00:12:27.727 "io_qpairs": 0, 00:12:27.727 "current_admin_qpairs": 0, 00:12:27.727 "current_io_qpairs": 0, 00:12:27.727 "pending_bdev_io": 0, 00:12:27.727 "completed_nvme_io": 0, 00:12:27.727 "transports": [] 
00:12:27.727 }, 00:12:27.727 { 00:12:27.727 "name": "nvmf_tgt_poll_group_003", 00:12:27.727 "admin_qpairs": 0, 00:12:27.727 "io_qpairs": 0, 00:12:27.727 "current_admin_qpairs": 0, 00:12:27.727 "current_io_qpairs": 0, 00:12:27.727 "pending_bdev_io": 0, 00:12:27.727 "completed_nvme_io": 0, 00:12:27.727 "transports": [] 00:12:27.727 } 00:12:27.727 ] 00:12:27.727 }' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.727 [2024-05-15 08:24:14.622506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:27.727 "tick_rate": 2300000000, 00:12:27.727 "poll_groups": [ 00:12:27.727 { 00:12:27.727 "name": "nvmf_tgt_poll_group_000", 00:12:27.727 "admin_qpairs": 0, 00:12:27.727 "io_qpairs": 0, 00:12:27.727 "current_admin_qpairs": 0, 00:12:27.727 "current_io_qpairs": 0, 00:12:27.727 "pending_bdev_io": 0, 00:12:27.727 "completed_nvme_io": 0, 00:12:27.727 "transports": [ 00:12:27.727 { 00:12:27.727 "trtype": "TCP" 00:12:27.727 } 00:12:27.727 ] 00:12:27.727 }, 00:12:27.727 { 00:12:27.727 "name": "nvmf_tgt_poll_group_001", 00:12:27.727 "admin_qpairs": 0, 00:12:27.727 "io_qpairs": 0, 00:12:27.727 "current_admin_qpairs": 0, 00:12:27.727 "current_io_qpairs": 0, 00:12:27.727 "pending_bdev_io": 0, 00:12:27.727 "completed_nvme_io": 0, 00:12:27.727 "transports": [ 00:12:27.727 { 00:12:27.727 "trtype": "TCP" 00:12:27.727 } 00:12:27.727 ] 00:12:27.727 }, 00:12:27.727 { 00:12:27.727 "name": "nvmf_tgt_poll_group_002", 00:12:27.727 "admin_qpairs": 0, 00:12:27.727 "io_qpairs": 0, 00:12:27.727 "current_admin_qpairs": 0, 00:12:27.727 "current_io_qpairs": 0, 00:12:27.727 "pending_bdev_io": 0, 00:12:27.727 "completed_nvme_io": 0, 00:12:27.727 "transports": [ 00:12:27.727 { 00:12:27.727 "trtype": "TCP" 00:12:27.727 } 00:12:27.727 ] 00:12:27.727 }, 00:12:27.727 { 00:12:27.727 "name": "nvmf_tgt_poll_group_003", 00:12:27.727 "admin_qpairs": 0, 00:12:27.727 "io_qpairs": 0, 00:12:27.727 "current_admin_qpairs": 0, 00:12:27.727 "current_io_qpairs": 0, 00:12:27.727 "pending_bdev_io": 0, 00:12:27.727 "completed_nvme_io": 0, 00:12:27.727 "transports": [ 00:12:27.727 { 00:12:27.727 "trtype": "TCP" 00:12:27.727 } 00:12:27.727 ] 00:12:27.727 } 00:12:27.727 ] 
00:12:27.727 }' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.727 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.985 Malloc1 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.985 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.986 [2024-05-15 08:24:14.786263] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:27.986 [2024-05-15 08:24:14.786486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.986 08:24:14 
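The subsystem provisioning in the rpc.sh@49-55 steps above, stripped of the harness (rpc_cmd is the autotest RPC wrapper; -d on allow_any_host disables the any-host shortcut so the per-host allow list is enforced):

    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420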
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:27.986 [2024-05-15 08:24:14.814898] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:27.986 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:27.986 could not add new controller: failed to write to nvme-fabrics device 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.986 08:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.379 08:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
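The NOT-wrapped connect above is the negative half of the allow-list test: with allow_any_host disabled and no hosts registered, the fabrics write fails ("Failed to write to /dev/nvme-fabrics: Input/output error") and the target logs that the subsystem does not allow the host NQN. The positive half that follows, condensed (NVME_HOST carries the --hostnqn/--hostid pair generated earlier; in the harness the NOT helper inverts the exit status instead of the echo used here):

    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        && echo "unexpected: connect should be rejected"   # host not on the allow list yet
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420  # now admitted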
00:12:29.379 08:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:29.379 08:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.379 08:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:29.379 08:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.281 [2024-05-15 08:24:18.238045] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:31.281 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:31.281 could not add new controller: failed to write to nvme-fabrics device 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.281 08:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.655 08:24:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.655 08:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:32.655 08:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.655 08:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:32.655 08:24:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.553 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:34.554 08:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.554 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.554 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.554 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.554 08:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:34.554 08:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:34.554 08:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.554 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.554 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.811 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.812 [2024-05-15 08:24:21.584719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.812 08:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.184 08:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.184 08:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:36.184 08:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.184 08:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:36.184 08:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:38.080 
08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.080 [2024-05-15 08:24:24.922800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.080 08:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.013 08:24:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.013 08:24:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:39.013 08:24:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.013 08:24:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:39.013 08:24:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.542 08:24:28 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.542 [2024-05-15 08:24:28.175211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.542 08:24:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.475 08:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.475 08:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:42.475 08:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.475 08:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:42.475 08:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:44.376 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.634 [2024-05-15 08:24:31.448842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.634 08:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.567 08:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:12:45.567 08:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:45.568 08:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.568 08:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:45.568 08:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.096 
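Each pass of the loop running above (target/rpc.sh @81 through @94) stands up a subsystem, attaches this host over TCP, waits for the namespace to surface, and tears everything back down. Condensed into plain commands, assuming rpc.py is on PATH, the Malloc1 bdev already exists, and with the host identity reduced to the NVME_HOSTNQN / NVME_HOSTID variables that nvmf/common.sh derives from nvme gen-hostnqn:

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach the bdev as nsid 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # waitforserial confirms the SPDKISFASTANDAWESOME device appeared, then unwind:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1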
[2024-05-15 08:24:34.753971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.096 08:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.031 08:24:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.031 08:24:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:49.031 08:24:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.031 08:24:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:49.031 08:24:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:50.930 08:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:50.930 08:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:50.930 08:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.189 08:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:51.189 08:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.189 08:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:51.189 08:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 [2024-05-15 08:24:38.097868] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 [2024-05-15 08:24:38.145956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 [2024-05-15 08:24:38.198133] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.189 
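Note that the loop now running (target/rpc.sh @99 through @107) differs from the previous one: no host ever connects. Each of its five iterations only churns target-side state (create the subsystem, add the TCP listener, attach Malloc1 at the default nsid, open it to any host, then immediately remove the namespace and delete the subsystem), which exercises rapid subsystem setup and teardown with no I/O in flight. A plain-shell sketch under the same assumptions as before:

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: default nsid 1
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done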
08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.189 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 [2024-05-15 08:24:38.246308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 [2024-05-15 08:24:38.294479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
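The nvmf_get_stats call whose output follows reports per-poll-group counters: admin_qpairs and io_qpairs count connections accepted over the whole run, the current_* fields show what is live right now (zero here, since every host has disconnected), and completed_nvme_io counts serviced commands. The jsum helper the script then applies is simply a jq-plus-awk reduction over one field; either of the following produces the same total:

    # jsum '.poll_groups[].io_qpairs', as target/rpc.sh defines it:
    rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}'
    # an equivalent single-tool form, collecting the values into an array for jq's add:
    rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'

The assertions only require the sums to be positive; the 672 seen below is 168 cumulative I/O qpairs on each of the four poll groups, and the 7 is the admin qpair total (2 + 2 + 1 + 2).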
00:12:51.448 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:51.448 "tick_rate": 2300000000, 00:12:51.448 "poll_groups": [ 00:12:51.448 { 00:12:51.448 "name": "nvmf_tgt_poll_group_000", 00:12:51.448 "admin_qpairs": 2, 00:12:51.448 "io_qpairs": 168, 00:12:51.448 "current_admin_qpairs": 0, 00:12:51.448 "current_io_qpairs": 0, 00:12:51.448 "pending_bdev_io": 0, 00:12:51.448 "completed_nvme_io": 220, 00:12:51.448 "transports": [ 00:12:51.448 { 00:12:51.448 "trtype": "TCP" 00:12:51.448 } 00:12:51.448 ] 00:12:51.448 }, 00:12:51.448 { 00:12:51.448 "name": "nvmf_tgt_poll_group_001", 00:12:51.448 "admin_qpairs": 2, 00:12:51.448 "io_qpairs": 168, 00:12:51.448 "current_admin_qpairs": 0, 00:12:51.448 "current_io_qpairs": 0, 00:12:51.448 "pending_bdev_io": 0, 00:12:51.448 "completed_nvme_io": 271, 00:12:51.448 "transports": [ 00:12:51.448 { 00:12:51.448 "trtype": "TCP" 00:12:51.448 } 00:12:51.448 ] 00:12:51.448 }, 00:12:51.448 { 00:12:51.448 "name": "nvmf_tgt_poll_group_002", 00:12:51.448 "admin_qpairs": 1, 00:12:51.448 "io_qpairs": 168, 00:12:51.448 "current_admin_qpairs": 0, 00:12:51.448 "current_io_qpairs": 0, 00:12:51.448 "pending_bdev_io": 0, 00:12:51.448 "completed_nvme_io": 313, 00:12:51.448 "transports": [ 00:12:51.448 { 00:12:51.448 "trtype": "TCP" 00:12:51.448 } 00:12:51.448 ] 00:12:51.448 }, 00:12:51.448 { 00:12:51.448 "name": "nvmf_tgt_poll_group_003", 00:12:51.448 "admin_qpairs": 2, 00:12:51.448 "io_qpairs": 168, 00:12:51.449 "current_admin_qpairs": 0, 00:12:51.449 "current_io_qpairs": 0, 00:12:51.449 "pending_bdev_io": 0, 00:12:51.449 "completed_nvme_io": 218, 00:12:51.449 "transports": [ 00:12:51.449 { 00:12:51.449 "trtype": "TCP" 00:12:51.449 } 00:12:51.449 ] 00:12:51.449 } 00:12:51.449 ] 00:12:51.449 }' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:51.449 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:51.449 rmmod nvme_tcp 00:12:51.449 rmmod nvme_fabrics 00:12:51.707 rmmod nvme_keyring 00:12:51.707 
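With the test body finished, nvmftestfini unwinds the fixture: sync, unload the kernel initiator modules (the rmmod lines above), kill the nvmf_tgt process by pid, and dismantle the per-target network namespace. Roughly, using the names from this run; the body of _remove_spdk_ns is not shown in this trace, so the netns deletion line is an assumption about what that helper amounts to:

    sync
    modprobe -v -r nvme-tcp             # drags nvme_fabrics and nvme_keyring out with it
    kill "$nvmfpid" && wait "$nvmfpid"  # the nvmf_tgt reactor process, pid 192631 here
    ip netns delete cvl_0_0_ns_spdk     # assumed: the effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1            # clear the initiator-side interface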
08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 192631 ']' 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 192631 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 192631 ']' 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 192631 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 192631 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 192631' 00:12:51.707 killing process with pid 192631 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 192631 00:12:51.707 [2024-05-15 08:24:38.549737] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:51.707 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 192631 00:12:51.979 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.979 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:51.979 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:51.979 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.979 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:51.979 08:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.979 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.979 08:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.880 08:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:53.880 00:12:53.880 real 0m32.707s 00:12:53.880 user 1m41.127s 00:12:53.880 sys 0m5.749s 00:12:53.880 08:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:53.880 08:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.880 ************************************ 00:12:53.880 END TEST nvmf_rpc 00:12:53.880 ************************************ 00:12:53.880 08:24:40 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:53.880 08:24:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:53.880 08:24:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:53.880 08:24:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.138 ************************************ 00:12:54.138 START TEST nvmf_invalid 00:12:54.138 ************************************ 00:12:54.138 08:24:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.138 * Looking for test storage... 00:12:54.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.138 08:24:41 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.139 08:24:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:59.404 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.404 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:59.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:59.405 Found net devices under 0000:86:00.0: cvl_0_0 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:59.405 Found net devices under 0000:86:00.1: cvl_0_1 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:59.405 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:59.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
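nvmf_tcp_init, traced above, splits the two NIC ports into roles: cvl_0_0 moves into a private network namespace (cvl_0_0_ns_spdk) and becomes the target at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic genuinely crosses between the two ports. Collected in order, the plumbing commands the trace just executed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # sanity ping; its reply is printed next

The addresses are added before the links come up, and loopback inside the namespace is raised as well.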
00:12:59.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:12:59.664 00:12:59.664 --- 10.0.0.2 ping statistics --- 00:12:59.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.664 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:12:59.664 00:12:59.664 --- 10.0.0.1 ping statistics --- 00:12:59.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.664 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=200455 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 200455 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 200455 ']' 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:59.664 08:24:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.665 08:24:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.665 08:24:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:59.665 08:24:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.665 [2024-05-15 08:24:46.541676] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
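Once the freshly started nvmf_tgt finishes its EAL and reactor startup (the next few entries), the nvmf_invalid cases begin. Each case feeds a deliberately malformed argument to the JSON-RPC interface and pattern-matches the error payload rather than trusting exit codes alone. The first one targets a nonexistent tgt_name and expects the -32603 "Unable to find target" response; condensed, the shape of every check is:

    out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4162 2>&1) || true
    [[ $out == *"Unable to find target"* ]]   # assert on the message text

The following cases do the same with a serial number and then a model number carrying an embedded control character (\x1f), expecting "Invalid SN" and "Invalid MN" in turn.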
00:12:59.665 [2024-05-15 08:24:46.541718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.665 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.665 [2024-05-15 08:24:46.598699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.665 [2024-05-15 08:24:46.678718] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.665 [2024-05-15 08:24:46.678751] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.665 [2024-05-15 08:24:46.678758] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.665 [2024-05-15 08:24:46.678764] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.665 [2024-05-15 08:24:46.678769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.665 [2024-05-15 08:24:46.678812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.665 [2024-05-15 08:24:46.678912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.665 [2024-05-15 08:24:46.678973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.665 [2024-05-15 08:24:46.678975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4162 00:13:00.601 [2024-05-15 08:24:47.552476] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:00.601 { 00:13:00.601 "nqn": "nqn.2016-06.io.spdk:cnode4162", 00:13:00.601 "tgt_name": "foobar", 00:13:00.601 "method": "nvmf_create_subsystem", 00:13:00.601 "req_id": 1 00:13:00.601 } 00:13:00.601 Got JSON-RPC error response 00:13:00.601 response: 00:13:00.601 { 00:13:00.601 "code": -32603, 00:13:00.601 "message": "Unable to find target foobar" 00:13:00.601 }' 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:00.601 { 00:13:00.601 "nqn": "nqn.2016-06.io.spdk:cnode4162", 00:13:00.601 "tgt_name": "foobar", 00:13:00.601 "method": "nvmf_create_subsystem", 00:13:00.601 "req_id": 1 00:13:00.601 } 00:13:00.601 Got JSON-RPC error response 00:13:00.601 response: 00:13:00.601 { 00:13:00.601 "code": -32603, 00:13:00.601 "message": "Unable to find target foobar" 00:13:00.601 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:00.601 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16065 00:13:00.860 [2024-05-15 08:24:47.733134] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16065: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:00.860 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:00.860 { 00:13:00.860 "nqn": "nqn.2016-06.io.spdk:cnode16065", 00:13:00.860 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:00.860 "method": "nvmf_create_subsystem", 00:13:00.860 "req_id": 1 00:13:00.860 } 00:13:00.860 Got JSON-RPC error response 00:13:00.860 response: 00:13:00.860 { 00:13:00.860 "code": -32602, 00:13:00.860 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:00.860 }' 00:13:00.860 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:00.860 { 00:13:00.860 "nqn": "nqn.2016-06.io.spdk:cnode16065", 00:13:00.860 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:00.860 "method": "nvmf_create_subsystem", 00:13:00.860 "req_id": 1 00:13:00.860 } 00:13:00.860 Got JSON-RPC error response 00:13:00.860 response: 00:13:00.860 { 00:13:00.860 "code": -32602, 00:13:00.860 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:00.860 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:00.860 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:00.860 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29732 00:13:01.120 [2024-05-15 08:24:47.921739] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29732: invalid model number 'SPDK_Controller' 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:01.120 { 00:13:01.120 "nqn": "nqn.2016-06.io.spdk:cnode29732", 00:13:01.120 "model_number": "SPDK_Controller\u001f", 00:13:01.120 "method": "nvmf_create_subsystem", 00:13:01.120 "req_id": 1 00:13:01.120 } 00:13:01.120 Got JSON-RPC error response 00:13:01.120 response: 00:13:01.120 { 00:13:01.120 "code": -32602, 00:13:01.120 "message": "Invalid MN SPDK_Controller\u001f" 00:13:01.120 }' 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:01.120 { 00:13:01.120 "nqn": "nqn.2016-06.io.spdk:cnode29732", 00:13:01.120 "model_number": "SPDK_Controller\u001f", 00:13:01.120 "method": "nvmf_create_subsystem", 00:13:01.120 "req_id": 1 00:13:01.120 } 00:13:01.120 Got JSON-RPC error response 00:13:01.120 response: 00:13:01.120 { 00:13:01.120 "code": -32602, 00:13:01.120 "message": "Invalid MN SPDK_Controller\u001f" 00:13:01.120 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:01.120 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:01.121 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:01.121 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:01.121 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:01.121 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:01.121 08:24:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:47 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@29 -- # string='\-+u-{V8y.Sya?r@=lK#' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '\-+u-{V8y.Sya?r@=lK#' 00:13:01.121 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\-+u-{V8y.Sya?r@=lK#' nqn.2016-06.io.spdk:cnode25416 00:13:01.380 [2024-05-15 08:24:48.238851] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25416: invalid serial number '\-+u-{V8y.Sya?r@=lK#' 00:13:01.380 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 
00:13:01.380 { 00:13:01.380 "nqn": "nqn.2016-06.io.spdk:cnode25416", 00:13:01.380 "serial_number": "\\-+u-\u007f{V8y.S\u007fya?r@=lK#", 00:13:01.380 "method": "nvmf_create_subsystem", 00:13:01.380 "req_id": 1 00:13:01.380 } 00:13:01.380 Got JSON-RPC error response 00:13:01.380 response: 00:13:01.380 { 00:13:01.380 "code": -32602, 00:13:01.380 "message": "Invalid SN \\-+u-\u007f{V8y.S\u007fya?r@=lK#" 00:13:01.380 }' 00:13:01.380 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:01.380 { 00:13:01.380 "nqn": "nqn.2016-06.io.spdk:cnode25416", 00:13:01.380 "serial_number": "\\-+u-\u007f{V8y.S\u007fya?r@=lK#", 00:13:01.380 "method": "nvmf_create_subsystem", 00:13:01.380 "req_id": 1 00:13:01.380 } 00:13:01.380 Got JSON-RPC error response 00:13:01.380 response: 00:13:01.380 { 00:13:01.380 "code": -32602, 00:13:01.380 "message": "Invalid SN \\-+u-\u007f{V8y.S\u007fya?r@=lK#" 00:13:01.380 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:01.380 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:01.380 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:01.380 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:01.380 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:01.380 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 
00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 
00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.381 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]] 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'l}AlrGj1/zS"q2-+c|o5ZoFRj;Ly5|&Kie.XqY$zC' 00:13:01.641 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'l}AlrGj1/zS"q2-+c|o5ZoFRj;Ly5|&Kie.XqY$zC' nqn.2016-06.io.spdk:cnode23335 00:13:01.901 [2024-05-15 08:24:48.684379] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23335: invalid model number 'l}AlrGj1/zS"q2-+c|o5ZoFRj;Ly5|&Kie.XqY$zC' 00:13:01.901 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:01.901 { 00:13:01.901 "nqn": 
"nqn.2016-06.io.spdk:cnode23335", 00:13:01.901 "model_number": "l}AlrGj1/zS\"q2-+c|o5ZoFRj;Ly5|&Kie.XqY$zC", 00:13:01.901 "method": "nvmf_create_subsystem", 00:13:01.901 "req_id": 1 00:13:01.901 } 00:13:01.901 Got JSON-RPC error response 00:13:01.901 response: 00:13:01.901 { 00:13:01.901 "code": -32602, 00:13:01.901 "message": "Invalid MN l}AlrGj1/zS\"q2-+c|o5ZoFRj;Ly5|&Kie.XqY$zC" 00:13:01.901 }' 00:13:01.901 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:01.901 { 00:13:01.901 "nqn": "nqn.2016-06.io.spdk:cnode23335", 00:13:01.901 "model_number": "l}AlrGj1/zS\"q2-+c|o5ZoFRj;Ly5|&Kie.XqY$zC", 00:13:01.901 "method": "nvmf_create_subsystem", 00:13:01.901 "req_id": 1 00:13:01.901 } 00:13:01.901 Got JSON-RPC error response 00:13:01.901 response: 00:13:01.901 { 00:13:01.901 "code": -32602, 00:13:01.901 "message": "Invalid MN l}AlrGj1/zS\"q2-+c|o5ZoFRj;Ly5|&Kie.XqY$zC" 00:13:01.901 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:01.901 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:01.901 [2024-05-15 08:24:48.877071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.901 08:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:02.160 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:02.160 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:02.160 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:02.160 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:02.160 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:02.420 [2024-05-15 08:24:49.250424] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:02.420 [2024-05-15 08:24:49.250488] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:02.420 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:02.420 { 00:13:02.420 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:02.420 "listen_address": { 00:13:02.420 "trtype": "tcp", 00:13:02.420 "traddr": "", 00:13:02.420 "trsvcid": "4421" 00:13:02.420 }, 00:13:02.420 "method": "nvmf_subsystem_remove_listener", 00:13:02.420 "req_id": 1 00:13:02.420 } 00:13:02.420 Got JSON-RPC error response 00:13:02.420 response: 00:13:02.420 { 00:13:02.420 "code": -32602, 00:13:02.420 "message": "Invalid parameters" 00:13:02.420 }' 00:13:02.420 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:02.420 { 00:13:02.420 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:02.420 "listen_address": { 00:13:02.420 "trtype": "tcp", 00:13:02.420 "traddr": "", 00:13:02.420 "trsvcid": "4421" 00:13:02.420 }, 00:13:02.420 "method": "nvmf_subsystem_remove_listener", 00:13:02.420 "req_id": 1 00:13:02.420 } 00:13:02.420 Got JSON-RPC error response 00:13:02.420 response: 00:13:02.420 { 00:13:02.420 "code": -32602, 00:13:02.420 "message": "Invalid parameters" 00:13:02.420 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:02.420 08:24:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19108 -i 0 00:13:02.420 [2024-05-15 08:24:49.435055] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19108: invalid cntlid range [0-65519] 00:13:02.679 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:02.679 { 00:13:02.679 "nqn": "nqn.2016-06.io.spdk:cnode19108", 00:13:02.679 "min_cntlid": 0, 00:13:02.679 "method": "nvmf_create_subsystem", 00:13:02.679 "req_id": 1 00:13:02.679 } 00:13:02.679 Got JSON-RPC error response 00:13:02.679 response: 00:13:02.679 { 00:13:02.679 "code": -32602, 00:13:02.679 "message": "Invalid cntlid range [0-65519]" 00:13:02.679 }' 00:13:02.679 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:02.679 { 00:13:02.679 "nqn": "nqn.2016-06.io.spdk:cnode19108", 00:13:02.679 "min_cntlid": 0, 00:13:02.679 "method": "nvmf_create_subsystem", 00:13:02.679 "req_id": 1 00:13:02.679 } 00:13:02.679 Got JSON-RPC error response 00:13:02.679 response: 00:13:02.679 { 00:13:02.679 "code": -32602, 00:13:02.679 "message": "Invalid cntlid range [0-65519]" 00:13:02.679 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.679 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17257 -i 65520 00:13:02.679 [2024-05-15 08:24:49.603615] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17257: invalid cntlid range [65520-65519] 00:13:02.679 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:02.679 { 00:13:02.679 "nqn": "nqn.2016-06.io.spdk:cnode17257", 00:13:02.679 "min_cntlid": 65520, 00:13:02.679 "method": "nvmf_create_subsystem", 00:13:02.679 "req_id": 1 00:13:02.679 } 00:13:02.679 Got JSON-RPC error response 00:13:02.679 response: 00:13:02.679 { 00:13:02.680 "code": -32602, 00:13:02.680 "message": "Invalid cntlid range [65520-65519]" 00:13:02.680 }' 00:13:02.680 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:02.680 { 00:13:02.680 "nqn": "nqn.2016-06.io.spdk:cnode17257", 00:13:02.680 "min_cntlid": 65520, 00:13:02.680 "method": "nvmf_create_subsystem", 00:13:02.680 "req_id": 1 00:13:02.680 } 00:13:02.680 Got JSON-RPC error response 00:13:02.680 response: 00:13:02.680 { 00:13:02.680 "code": -32602, 00:13:02.680 "message": "Invalid cntlid range [65520-65519]" 00:13:02.680 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.680 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4699 -I 0 00:13:02.939 [2024-05-15 08:24:49.780273] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4699: invalid cntlid range [1-0] 00:13:02.939 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:02.939 { 00:13:02.939 "nqn": "nqn.2016-06.io.spdk:cnode4699", 00:13:02.939 "max_cntlid": 0, 00:13:02.939 "method": "nvmf_create_subsystem", 00:13:02.939 "req_id": 1 00:13:02.939 } 00:13:02.939 Got JSON-RPC error response 00:13:02.939 response: 00:13:02.939 { 00:13:02.939 "code": -32602, 00:13:02.939 "message": "Invalid cntlid range [1-0]" 00:13:02.939 }' 00:13:02.939 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:02.939 { 00:13:02.939 "nqn": 
"nqn.2016-06.io.spdk:cnode4699", 00:13:02.939 "max_cntlid": 0, 00:13:02.939 "method": "nvmf_create_subsystem", 00:13:02.939 "req_id": 1 00:13:02.939 } 00:13:02.939 Got JSON-RPC error response 00:13:02.939 response: 00:13:02.939 { 00:13:02.939 "code": -32602, 00:13:02.939 "message": "Invalid cntlid range [1-0]" 00:13:02.939 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.939 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5657 -I 65520 00:13:02.939 [2024-05-15 08:24:49.956820] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5657: invalid cntlid range [1-65520] 00:13:03.199 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:03.199 { 00:13:03.199 "nqn": "nqn.2016-06.io.spdk:cnode5657", 00:13:03.199 "max_cntlid": 65520, 00:13:03.199 "method": "nvmf_create_subsystem", 00:13:03.199 "req_id": 1 00:13:03.199 } 00:13:03.199 Got JSON-RPC error response 00:13:03.199 response: 00:13:03.199 { 00:13:03.199 "code": -32602, 00:13:03.199 "message": "Invalid cntlid range [1-65520]" 00:13:03.199 }' 00:13:03.199 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:03.199 { 00:13:03.199 "nqn": "nqn.2016-06.io.spdk:cnode5657", 00:13:03.199 "max_cntlid": 65520, 00:13:03.199 "method": "nvmf_create_subsystem", 00:13:03.199 "req_id": 1 00:13:03.199 } 00:13:03.199 Got JSON-RPC error response 00:13:03.199 response: 00:13:03.199 { 00:13:03.199 "code": -32602, 00:13:03.199 "message": "Invalid cntlid range [1-65520]" 00:13:03.199 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.199 08:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20729 -i 6 -I 5 00:13:03.199 [2024-05-15 08:24:50.153497] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20729: invalid cntlid range [6-5] 00:13:03.199 08:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:03.199 { 00:13:03.199 "nqn": "nqn.2016-06.io.spdk:cnode20729", 00:13:03.199 "min_cntlid": 6, 00:13:03.199 "max_cntlid": 5, 00:13:03.199 "method": "nvmf_create_subsystem", 00:13:03.199 "req_id": 1 00:13:03.199 } 00:13:03.199 Got JSON-RPC error response 00:13:03.199 response: 00:13:03.199 { 00:13:03.199 "code": -32602, 00:13:03.199 "message": "Invalid cntlid range [6-5]" 00:13:03.199 }' 00:13:03.199 08:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:03.199 { 00:13:03.199 "nqn": "nqn.2016-06.io.spdk:cnode20729", 00:13:03.199 "min_cntlid": 6, 00:13:03.199 "max_cntlid": 5, 00:13:03.199 "method": "nvmf_create_subsystem", 00:13:03.199 "req_id": 1 00:13:03.199 } 00:13:03.199 Got JSON-RPC error response 00:13:03.199 response: 00:13:03.199 { 00:13:03.199 "code": -32602, 00:13:03.199 "message": "Invalid cntlid range [6-5]" 00:13:03.199 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.199 08:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:03.459 { 00:13:03.459 "name": "foobar", 00:13:03.459 "method": "nvmf_delete_target", 00:13:03.459 "req_id": 1 00:13:03.459 } 00:13:03.459 Got JSON-RPC error response 00:13:03.459 response: 
00:13:03.459 { 00:13:03.459 "code": -32602, 00:13:03.459 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:03.459 }' 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:03.459 { 00:13:03.459 "name": "foobar", 00:13:03.459 "method": "nvmf_delete_target", 00:13:03.459 "req_id": 1 00:13:03.459 } 00:13:03.459 Got JSON-RPC error response 00:13:03.459 response: 00:13:03.459 { 00:13:03.459 "code": -32602, 00:13:03.459 "message": "The specified target doesn't exist, cannot delete it." 00:13:03.459 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:03.459 rmmod nvme_tcp 00:13:03.459 rmmod nvme_fabrics 00:13:03.459 rmmod nvme_keyring 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 200455 ']' 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 200455 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 200455 ']' 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 200455 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 200455 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 200455' 00:13:03.459 killing process with pid 200455 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 200455 00:13:03.459 [2024-05-15 08:24:50.418239] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:03.459 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 200455 00:13:03.719 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:03.719 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:03.719 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:03.719 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.719 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:03.719 08:24:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.719 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.719 08:24:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.261 08:24:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.261 00:13:06.261 real 0m11.775s 00:13:06.261 user 0m19.483s 00:13:06.261 sys 0m5.019s 00:13:06.261 08:24:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.261 08:24:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.261 ************************************ 00:13:06.261 END TEST nvmf_invalid 00:13:06.261 ************************************ 00:13:06.261 08:24:52 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:06.261 08:24:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:06.261 08:24:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.261 08:24:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.261 ************************************ 00:13:06.261 START TEST nvmf_abort 00:13:06.261 ************************************ 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:06.261 * Looking for test storage... 00:13:06.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.261 
08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.261 08:24:52 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.262 08:24:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.531 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.531 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:11.531 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:11.531 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:11.531 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:11.531 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:11.531 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:11.531 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:11.531 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:11.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:11.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
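The discovery pass above filters PCI functions by vendor/device ID (the e810 ports here match 0x8086:0x159b) and then resolves each match to its kernel interface through sysfs. A minimal sketch of that PCI-to-netdev mapping, assuming the standard sysfs layout and reusing the 0000:86:00.0 address from this run:

    pci=0000:86:00.0                              # first E810 port found above
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $net ]] || continue                 # function has no bound netdev
        echo "Found net devices under $pci: ${net##*/}"
    done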
00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:11.532 Found net devices under 0000:86:00.0: cvl_0_0 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:11.532 Found net devices under 0000:86:00.1: cvl_0_1 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
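nvmf_tcp_init above splits the two ports into a loopback pair: cvl_0_0 moves into a private network namespace and plays the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk                                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables rule and the two pings that follow open TCP port 4420 on the initiator interface and verify the 10.0.0.0/24 path in both directions, so a single machine can exercise real NICs end to end.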
00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:11.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:13:11.532 00:13:11.532 --- 10.0.0.2 ping statistics --- 00:13:11.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.532 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:13:11.532 00:13:11.532 --- 10.0.0.1 ping statistics --- 00:13:11.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.532 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:11.532 08:24:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:11.532 08:24:58 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:11.532 08:24:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:11.532 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:11.532 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.532 08:24:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=204615 00:13:11.532 08:24:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 204615 00:13:11.532 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 204615 ']' 00:13:11.533 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.533 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:11.533 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.533 08:24:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:11.533 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:11.533 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.533 [2024-05-15 08:24:58.062440] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
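nvmfappstart launches the target inside the namespace with core mask 0xE (cores 1 to 3, matching the three reactor notices below) and tracepoint mask 0xFFFF, then blocks until the RPC socket answers. A rough equivalent of that start-and-wait step, assuming repo-relative paths; the polling loop is an approximation of what waitforlisten does:

    # start the target in the namespace, then wait for /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died
        sleep 0.5
    done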
00:13:11.533 [2024-05-15 08:24:58.062483] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.533 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.533 [2024-05-15 08:24:58.119678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.533 [2024-05-15 08:24:58.198728] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.533 [2024-05-15 08:24:58.198762] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.533 [2024-05-15 08:24:58.198769] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.533 [2024-05-15 08:24:58.198778] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.533 [2024-05-15 08:24:58.198783] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.533 [2024-05-15 08:24:58.198882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.533 [2024-05-15 08:24:58.198967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.533 [2024-05-15 08:24:58.198969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.099 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:12.099 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 [2024-05-15 08:24:58.907412] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 Malloc0 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 Delay0 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:12.100 08:24:58 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 [2024-05-15 08:24:58.976666] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:12.100 [2024-05-15 08:24:58.976892] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.100 08:24:58 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:12.100 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.100 [2024-05-15 08:24:59.081935] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:14.628 Initializing NVMe Controllers 00:13:14.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:14.628 controller IO queue size 128 less than required 00:13:14.628 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:14.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:14.628 Initialization complete. Launching workers. 
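The subsystem the abort example just attached to was assembled entirely over JSON-RPC; reduced to direct calls (rpc.py standing in for scripts/rpc.py), the sequence from abort.sh is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256              # TCP transport
    rpc.py bdev_malloc_create 64 4096 -b Malloc0                       # 64 MiB, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # 1 s latency on every op
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The delay bdev is the point of the setup: with 1,000,000 us added to each read and write, queued I/O stays in flight long enough for abort commands to have something to cancel, which is why the tallies below show tens of thousands of aborted requests against only a handful of completions.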
00:13:14.628 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 45732 00:13:14.628 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 45795, failed to submit 62 00:13:14.628 success 45736, unsuccess 59, failed 0 00:13:14.628 08:25:01 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:14.628 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.628 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.628 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.628 08:25:01 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:14.628 08:25:01 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.629 rmmod nvme_tcp 00:13:14.629 rmmod nvme_fabrics 00:13:14.629 rmmod nvme_keyring 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 204615 ']' 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 204615 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 204615 ']' 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 204615 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 204615 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 204615' 00:13:14.629 killing process with pid 204615 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 204615 00:13:14.629 [2024-05-15 08:25:01.243468] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 204615 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.629 08:25:01 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.629 08:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.534 08:25:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.534 00:13:16.534 real 0m10.758s 00:13:16.534 user 0m13.115s 00:13:16.534 sys 0m4.473s 00:13:16.534 08:25:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:16.534 08:25:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:16.534 ************************************ 00:13:16.534 END TEST nvmf_abort 00:13:16.534 ************************************ 00:13:16.794 08:25:03 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:16.794 08:25:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:16.794 08:25:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:16.794 08:25:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:16.794 ************************************ 00:13:16.794 START TEST nvmf_ns_hotplug_stress 00:13:16.794 ************************************ 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:16.794 * Looking for test storage... 00:13:16.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.794 08:25:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.794 08:25:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:16.794 08:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.111 08:25:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:22.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:22.111 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.111 
08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:22.111 Found net devices under 0000:86:00.0: cvl_0_0 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:22.111 Found net devices under 0000:86:00.1: cvl_0_1 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:22.111 
08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.111 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:22.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:13:22.112 00:13:22.112 --- 10.0.0.2 ping statistics --- 00:13:22.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.112 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:13:22.112 00:13:22.112 --- 10.0.0.1 ping statistics --- 00:13:22.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.112 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=208608 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 208608 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 208608 ']' 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:22.112 08:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.112 [2024-05-15 08:25:08.585597] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:13:22.112 [2024-05-15 08:25:08.585641] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.112 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.112 [2024-05-15 08:25:08.641645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.112 [2024-05-15 08:25:08.720307] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:22.112 [2024-05-15 08:25:08.720340] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.112 [2024-05-15 08:25:08.720348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.112 [2024-05-15 08:25:08.720353] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.112 [2024-05-15 08:25:08.720358] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.112 [2024-05-15 08:25:08.720395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.112 [2024-05-15 08:25:08.720413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.112 [2024-05-15 08:25:08.720415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.679 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:22.679 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:22.679 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.679 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.679 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.679 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.679 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:22.679 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:22.679 [2024-05-15 08:25:09.592603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.679 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:22.937 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.937 [2024-05-15 08:25:09.949735] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:22.937 [2024-05-15 08:25:09.949937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.195 08:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:23.195 08:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:23.453 Malloc0 00:13:23.453 08:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:23.711 Delay0 00:13:23.711 08:25:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.969 08:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:23.969 NULL1 00:13:23.969 08:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:24.227 08:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:24.227 08:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=209111 00:13:24.227 08:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:24.227 08:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.227 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.486 08:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.486 08:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:24.486 08:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:24.744 true 00:13:24.744 08:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:24.744 08:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.002 08:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.260 08:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:25.260 08:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:25.260 true 00:13:25.260 08:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:25.260 08:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.518 08:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.776 08:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:25.776 08:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:13:26.034 true 00:13:26.034 08:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:26.034 08:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.034 08:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.302 08:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:26.302 08:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:26.565 true 00:13:26.565 08:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:26.565 08:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.824 08:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.824 08:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:26.824 08:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:27.082 true 00:13:27.082 08:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:27.082 08:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.340 08:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.598 08:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:27.598 08:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:27.598 true 00:13:27.598 08:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:27.598 08:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.856 08:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.115 08:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:28.115 08:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:28.115 true 00:13:28.115 08:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:28.115 
08:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.373 08:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.632 08:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:28.632 08:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:28.890 true 00:13:28.890 08:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:28.890 08:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.890 08:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.148 08:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:29.148 08:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:29.406 true 00:13:29.406 08:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:29.406 08:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.664 08:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.664 08:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:29.664 08:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:29.922 true 00:13:29.922 08:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:29.922 08:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.180 08:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.438 08:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:30.438 08:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:30.438 true 00:13:30.438 08:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:30.438 08:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:30.697 08:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.955 08:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:30.955 08:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:31.214 true 00:13:31.214 08:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:31.214 08:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.214 08:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.472 08:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:31.472 08:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:31.730 true 00:13:31.730 08:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:31.730 08:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.988 08:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.246 08:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:32.246 08:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:32.246 true 00:13:32.246 08:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:32.246 08:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.505 08:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.763 08:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:32.763 08:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:33.021 true 00:13:33.021 08:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:33.021 08:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.021 08:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.279 08:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:33.279 08:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:33.537 true 00:13:33.537 08:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:33.537 08:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.796 08:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.796 08:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:33.796 08:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:34.055 true 00:13:34.055 08:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:34.055 08:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.313 08:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.572 08:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:34.572 08:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:34.572 true 00:13:34.572 08:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:34.572 08:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.830 08:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.090 08:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:35.090 08:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:35.349 true 00:13:35.349 08:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:35.349 08:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.608 08:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.608 08:25:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:35.608 08:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:35.866 true 00:13:35.866 08:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:35.866 08:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.124 08:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.382 08:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:36.382 08:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:36.382 true 00:13:36.382 08:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:36.382 08:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.641 08:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.900 08:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:36.900 08:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:37.159 true 00:13:37.159 08:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:37.159 08:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.159 08:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.417 08:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:37.417 08:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:37.675 true 00:13:37.675 08:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:37.675 08:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.934 08:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.192 08:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:38.192 08:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:38.192 true 00:13:38.192 08:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:38.192 08:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.450 08:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.708 08:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:38.708 08:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:38.708 true 00:13:38.966 08:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:38.966 08:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.966 08:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.224 08:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:39.224 08:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:39.482 true 00:13:39.482 08:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:39.482 08:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.740 08:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.740 08:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:39.740 08:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:39.998 true 00:13:39.998 08:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:39.998 08:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.257 08:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.516 08:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:40.516 08:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:40.516 true 00:13:40.774 08:25:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:40.774 08:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.774 08:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.032 08:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:41.032 08:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:41.290 true 00:13:41.290 08:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:41.290 08:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.548 08:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.548 08:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:41.548 08:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:41.806 true 00:13:41.806 08:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:41.806 08:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.065 08:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.323 08:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:13:42.323 08:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:13:42.323 true 00:13:42.323 08:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:42.323 08:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.581 08:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.839 08:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:13:42.839 08:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:13:43.096 true 00:13:43.096 08:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:43.096 08:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.354 08:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.354 08:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:13:43.354 08:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:13:43.613 true 00:13:43.613 08:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:43.613 08:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.872 08:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.130 08:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:13:44.130 08:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:13:44.130 true 00:13:44.130 08:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:44.130 08:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.388 08:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.646 08:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:13:44.646 08:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:13:44.903 true 00:13:44.903 08:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:44.903 08:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.160 08:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.160 08:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:13:45.160 08:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:13:45.418 true 00:13:45.418 08:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:45.418 08:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.677 
08:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.935 08:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:13:45.935 08:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:13:45.935 true 00:13:45.935 08:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:45.935 08:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.194 08:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.453 08:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:13:46.453 08:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:13:46.714 true 00:13:46.714 08:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:46.714 08:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.714 08:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.974 08:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:13:46.974 08:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:13:47.232 true 00:13:47.232 08:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:47.232 08:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.491 08:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.749 08:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:13:47.749 08:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:13:47.749 true 00:13:47.749 08:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:47.749 08:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.008 08:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.265 08:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:13:48.265 08:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:13:48.265 true 00:13:48.523 08:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:48.523 08:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.523 08:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.781 08:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:13:48.781 08:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:13:49.039 true 00:13:49.039 08:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:49.039 08:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.297 08:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.297 08:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:13:49.297 08:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:13:49.555 true 00:13:49.555 08:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:49.555 08:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.814 08:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.072 08:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:13:50.072 08:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:13:50.072 true 00:13:50.072 08:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:50.072 08:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.330 08:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.588 08:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:13:50.588 08:25:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:13:50.846 true 00:13:50.846 08:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:50.846 08:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.846 08:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.104 08:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:13:51.104 08:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:13:51.362 true 00:13:51.362 08:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:51.362 08:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.620 08:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.878 08:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:13:51.878 08:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:13:51.878 true 00:13:51.878 08:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:51.878 08:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.136 08:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.431 08:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:13:52.431 08:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:13:52.431 true 00:13:52.431 08:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:52.431 08:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.690 08:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.949 08:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:13:52.949 08:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 
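The stretch above, and the iterations that continue below up to null_size=1052, are the single-namespace phase of ns_hotplug_stress.sh: while the background I/O generator (PID 209111 in this run) is still alive, the script hot-removes namespace 1, re-adds the Delay0 bdev, and grows the NULL1 bdev by one step per pass. A minimal sketch of that loop, reconstructed from the script line numbers printed in the log (@44-@50); $rpc_py, $perf_pid, and the starting size are assumed names and values, not taken from the real script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=209111   # background I/O generator; 209111 in this run
    null_size=1000    # assumed start; the log shows it reaching 1052

    while kill -0 "$perf_pid" 2> /dev/null; do                            # @44: run until the generator exits
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                      # @49: grow the target size by one step
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                     # @50: resize NULL1 under live I/O
    done

The bare "true" lines in the log are bdev_null_resize's JSON-RPC result; the loop ends when kill -0 fails, which is visible further below as "No such process".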
00:13:53.207 true 00:13:53.207 08:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:53.207 08:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.207 08:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.465 08:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:13:53.465 08:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:13:53.723 true 00:13:53.723 08:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:53.723 08:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.981 08:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.239 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:13:54.239 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:13:54.239 true 00:13:54.239 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:54.239 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.499 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.499 Initializing NVMe Controllers 00:13:54.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:54.499 Controller IO queue size 128, less than required. 00:13:54.499 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:54.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:54.499 Initialization complete. Launching workers. 
00:13:54.499 ======================================================== 00:13:54.499 Latency(us) 00:13:54.499 Device Information : IOPS MiB/s Average min max 00:13:54.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26501.24 12.94 4830.11 2268.76 8803.18 00:13:54.499 ======================================================== 00:13:54.499 Total : 26501.24 12.94 4830.11 2268.76 8803.18 00:13:54.499 00:13:54.757 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:13:54.757 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:13:55.016 true 00:13:55.016 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 209111 00:13:55.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (209111) - No such process 00:13:55.016 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 209111 00:13:55.016 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.016 08:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.277 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:55.277 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:55.277 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:55.277 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.277 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:55.536 null0 00:13:55.536 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.536 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.536 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:55.536 null1 00:13:55.536 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.536 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.536 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:55.794 null2 00:13:55.794 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.794 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.794 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:56.052 null3 00:13:56.052 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.052 08:25:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.052 08:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:56.052 null4 00:13:56.052 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.052 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.052 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:56.309 null5 00:13:56.309 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.309 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.309 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:56.567 null6 00:13:56.567 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.567 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.567 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:56.826 null7 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
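A quick consistency check on the perf summary above (26501.24 IOPS, 12.94 MiB/s, 4830.11 us average latency, reported just before the generator exited): the throughput column matches 512-byte I/Os, and by Little's law the sustained queue depth works out to the controller's reported limit of 128. The 512-byte I/O size is an inference; it is not printed in this excerpt:

    # IOPS x I/O size -> MiB/s column (512 B per I/O assumed)
    awk 'BEGIN { printf "%.2f MiB/s\n", 26501.24 * 512 / (1024 * 1024) }'   # 12.94 MiB/s

    # Little's law: IOPS x mean latency (us -> s) -> I/Os in flight
    awk 'BEGIN { printf "%.1f in flight\n", 26501.24 * 4830.11 / 1e6 }'     # ~128.0

That figure of roughly 128 I/Os in flight is consistent with the earlier warning that the controller IO queue size (128) is less than required: the workload was queue-depth-bound at the NVMe driver.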
00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
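The interleaved entries from here on come from eight concurrent workers; the "wait 214713 214715 ..." entry below lists their PIDs. Each worker owns one namespace ID and one null bdev (nsid 1/null0 through nsid 8/null7) and attaches and detaches it ten times. A sketch of this phase, reconstructed from the script line numbers in the log (@14-@18 for the worker, @58-@66 for the launcher), with $rpc_py assumed as in the earlier sketch:

    add_remove() {                       # @14: one worker per (nsid, bdev) pair
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do   # @16: ten attach/detach cycles
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

    nthreads=8                                       # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do             # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096 # @60: 100 MiB bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do             # @62
        add_remove $((i + 1)) "null$i" &             # @63: nsids 1-8 against null0-null7
        pids+=($!)                                   # @64
    done
    wait "${pids[@]}"                                # @66: the PIDs listed in the log's wait line

Because every worker uses a distinct namespace ID, the eight add/remove streams hammer the target's attach/detach path concurrently without invalidating one another's namespaces.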
00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 214713 214715 214718 214721 214725 214728 214731 214734 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.826 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.084 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.084 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.084 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.084 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.084 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.084 08:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.084 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.342 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.342 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.342 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.342 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.342 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.342 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.342 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.342 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.342 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.342 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.343 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.600 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.601 08:25:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.601 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.858 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.858 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.858 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.858 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.858 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.859 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.117 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.117 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.117 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.117 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.117 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.117 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.117 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.117 08:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.117 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.375 
08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.375 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.640 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.640 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.640 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.640 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.640 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.641 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.907 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.908 08:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.166 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.166 
08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.166 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.166 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.166 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.166 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.166 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.166 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.425 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.683 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.940 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.940 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.940 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.940 
08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.940 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.940 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.940 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.940 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.940 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.940 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.940 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:00.198 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.198 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.198 08:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:00.198 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.456 rmmod nvme_tcp 00:14:00.456 rmmod nvme_fabrics 00:14:00.456 rmmod nvme_keyring 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 208608 ']' 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 208608 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 208608 ']' 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 208608 00:14:00.456 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:14:00.714 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:00.714 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 208608 00:14:00.714 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:00.714 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:00.714 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 208608' 00:14:00.714 killing process with pid 208608 00:14:00.714 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 208608 00:14:00.714 [2024-05-15 08:25:47.523256] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:00.714 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 208608 00:14:00.973 08:25:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.973 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.973 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.973 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.973 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.973 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.973 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.973 08:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.878 08:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.878 00:14:02.878 real 0m46.203s 00:14:02.878 user 3m18.664s 00:14:02.878 sys 0m15.799s 00:14:02.878 08:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:02.878 08:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.878 ************************************ 00:14:02.878 END TEST nvmf_ns_hotplug_stress 00:14:02.878 ************************************ 00:14:02.878 08:25:49 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:02.878 08:25:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:02.878 08:25:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:02.878 08:25:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.878 ************************************ 00:14:02.878 START TEST nvmf_connect_stress 00:14:02.878 ************************************ 00:14:02.878 08:25:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:03.137 * Looking for test storage... 
00:14:03.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.137 08:25:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.137 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:03.138 08:25:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:08.408 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:08.408 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.408 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:08.409 Found net devices under 0000:86:00.0: cvl_0_0 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.409 08:25:54 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:08.409 Found net devices under 0000:86:00.1: cvl_0_1 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:08.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:08.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:14:08.409 00:14:08.409 --- 10.0.0.2 ping statistics --- 00:14:08.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.409 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:08.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:14:08.409 00:14:08.409 --- 10.0.0.1 ping statistics --- 00:14:08.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.409 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=218870 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 218870 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 218870 ']' 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:08.409 08:25:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.409 [2024-05-15 08:25:54.667326] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
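
The nvmf_tcp_init trace above builds the whole TCP test topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened through iptables, and a ping in each direction proves the link before any NVMe/TCP traffic is attempted. Condensed into a standalone sketch, using the commands exactly as they appear in the trace (the cvl_0_* interface names are what this rig reports; substitute your own):

    # (the harness first flushes any stale v4 addresses from both ports)
    # target NIC into its own namespace; initiator NIC stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator 10.0.0.1, target 10.0.0.2, same /24
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring everything up, including loopback inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP (port 4420) in on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
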
00:14:08.409 [2024-05-15 08:25:54.667367] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.409 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.409 [2024-05-15 08:25:54.723464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:08.409 [2024-05-15 08:25:54.802111] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.409 [2024-05-15 08:25:54.802143] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.409 [2024-05-15 08:25:54.802151] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.409 [2024-05-15 08:25:54.802157] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.409 [2024-05-15 08:25:54.802162] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.409 [2024-05-15 08:25:54.802270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.409 [2024-05-15 08:25:54.802290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.409 [2024-05-15 08:25:54.802291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.699 [2024-05-15 08:25:55.511273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.699 [2024-05-15 08:25:55.527252] nvmf_rpc.c: 610:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:08.699 [2024-05-15 08:25:55.543264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.699 NULL1 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=218942 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.699 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.956 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.956 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:08.956 08:25:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.956 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.956 08:25:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.522 08:25:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.522 08:25:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:09.522 08:25:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.522 08:25:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.522 08:25:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.783 08:25:56 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.783 08:25:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:09.783 08:25:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.783 08:25:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.783 08:25:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.043 08:25:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.043 08:25:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:10.043 08:25:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.043 08:25:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.043 08:25:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.301 08:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.301 08:25:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:10.301 08:25:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.301 08:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.301 08:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.560 08:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.560 08:25:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:10.560 08:25:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.560 08:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.560 08:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.127 08:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.127 08:25:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:11.127 08:25:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.127 08:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.127 08:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.384 08:25:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.384 08:25:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:11.384 08:25:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.384 08:25:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.384 08:25:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.643 08:25:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.643 08:25:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:11.643 08:25:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.643 08:25:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.643 08:25:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.901 08:25:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:14:11.901 08:25:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:11.901 08:25:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.901 08:25:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.901 08:25:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.159 08:25:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.159 08:25:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:12.159 08:25:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.159 08:25:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.159 08:25:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.725 08:25:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.725 08:25:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:12.725 08:25:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.725 08:25:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.725 08:25:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.987 08:25:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.987 08:25:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:12.987 08:25:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.987 08:25:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.987 08:25:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.245 08:26:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.245 08:26:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:13.245 08:26:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.245 08:26:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.245 08:26:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.504 08:26:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.504 08:26:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:13.504 08:26:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.504 08:26:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.504 08:26:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.068 08:26:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.068 08:26:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:14.068 08:26:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.068 08:26:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.068 08:26:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.326 08:26:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.326 08:26:01 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 218942 00:14:14.326 08:26:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.326 08:26:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.326 08:26:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.585 08:26:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.585 08:26:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:14.585 08:26:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.585 08:26:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.585 08:26:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.844 08:26:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.844 08:26:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:14.844 08:26:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.844 08:26:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.844 08:26:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.102 08:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.102 08:26:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:15.102 08:26:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.102 08:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.102 08:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.666 08:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.666 08:26:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:15.666 08:26:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.666 08:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.666 08:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.923 08:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.923 08:26:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:15.923 08:26:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.923 08:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.923 08:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.181 08:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.181 08:26:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:16.181 08:26:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.181 08:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.181 08:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.439 08:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.439 08:26:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:16.439 
08:26:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.439 08:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.439 08:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.697 08:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.697 08:26:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:16.697 08:26:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.697 08:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.697 08:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.263 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.263 08:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:17.263 08:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.263 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.263 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.521 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.521 08:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:17.521 08:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.521 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.521 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.780 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.780 08:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:17.780 08:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.780 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.780 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.037 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.037 08:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:18.037 08:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.038 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.038 08:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.296 08:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.296 08:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:18.296 08:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.296 08:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.296 08:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.862 08:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.862 08:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:18.862 08:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:18.862 08:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.862 08:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.862 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 218942 00:14:19.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (218942) - No such process 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 218942 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.121 08:26:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.121 rmmod nvme_tcp 00:14:19.121 rmmod nvme_fabrics 00:14:19.121 rmmod nvme_keyring 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 218870 ']' 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 218870 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 218870 ']' 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 218870 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 218870 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 218870' 00:14:19.121 killing process with pid 218870 00:14:19.121 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 218870 00:14:19.121 [2024-05-15 08:26:06.086230] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:19.121 08:26:06 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 218870 00:14:19.381 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.381 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.381 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.381 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.381 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.381 08:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.381 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.381 08:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.913 08:26:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:21.913 00:14:21.913 real 0m18.484s 00:14:21.913 user 0m43.541s 00:14:21.913 sys 0m5.485s 00:14:21.913 08:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:21.913 08:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.913 ************************************ 00:14:21.913 END TEST nvmf_connect_stress 00:14:21.913 ************************************ 00:14:21.913 08:26:08 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:21.913 08:26:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:21.913 08:26:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:21.913 08:26:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.913 ************************************ 00:14:21.913 START TEST nvmf_fused_ordering 00:14:21.913 ************************************ 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:21.913 * Looking for test storage... 
00:14:21.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.913 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:21.914 08:26:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:27.175 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:27.175 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:27.175 Found net devices under 0000:86:00.0: cvl_0_0 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.175 08:26:13 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:27.175 Found net devices under 0000:86:00.1: cvl_0_1 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.175 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.176 08:26:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:27.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:27.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:14:27.176 00:14:27.176 --- 10.0.0.2 ping statistics --- 00:14:27.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.176 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:14:27.176 00:14:27.176 --- 10.0.0.1 ping statistics --- 00:14:27.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.176 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=224301 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 224301 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 224301 ']' 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:27.176 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.176 [2024-05-15 08:26:14.107031] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
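
Same bring-up as the previous test, now with a single-core mask (-m 0x2, one reactor) since fused ordering does not need the 0xE mask used by connect_stress. nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the RPC socket answers; the following is only a simplified stand-in for that pair (the polling loop approximates what waitforlisten does; rpc_get_methods is used here just as a cheap RPC to probe with):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # poll the UNIX-domain RPC socket until the target is ready
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died
        sleep 0.5
    done
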
00:14:27.176 [2024-05-15 08:26:14.107069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.176 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.176 [2024-05-15 08:26:14.162763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.433 [2024-05-15 08:26:14.243285] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.433 [2024-05-15 08:26:14.243323] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.433 [2024-05-15 08:26:14.243330] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.433 [2024-05-15 08:26:14.243336] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.433 [2024-05-15 08:26:14.243340] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.433 [2024-05-15 08:26:14.243380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.998 [2024-05-15 08:26:14.943032] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.998 [2024-05-15 08:26:14.959020] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:27.998 [2024-05-15 08:26:14.959235] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.998 NULL1 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.998 08:26:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:27.998 [2024-05-15 08:26:15.001130] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
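Taken together, the rpc_cmd calls above assemble the full target stack that the fused_ordering binary is about to exercise: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev exposed as namespace 1. Outside the harness the same stack can be built with plain rpc.py calls; a sketch, with the script path being illustrative:

# Assumed standalone equivalent of the rpc_cmd sequence above (script path is illustrative).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # same flags as recorded in the trace
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512                  # 1000 MiB, 512 B blocks ("size: 1GB" below)
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1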
00:14:27.998 [2024-05-15 08:26:15.001159] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224348 ] 00:14:28.256 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.514 Attached to nqn.2016-06.io.spdk:cnode1 00:14:28.514 Namespace ID: 1 size: 1GB 00:14:28.514 fused_ordering(0) 00:14:28.514 fused_ordering(1) 00:14:28.514 fused_ordering(2) 00:14:28.514 fused_ordering(3) 00:14:28.514 fused_ordering(4) 00:14:28.514 fused_ordering(5) 00:14:28.514 fused_ordering(6) 00:14:28.514 fused_ordering(7) 00:14:28.514 fused_ordering(8) 00:14:28.514 fused_ordering(9) 00:14:28.514 fused_ordering(10) 00:14:28.514 fused_ordering(11) 00:14:28.514 fused_ordering(12) 00:14:28.514 fused_ordering(13) 00:14:28.514 fused_ordering(14) 00:14:28.514 fused_ordering(15) 00:14:28.514 fused_ordering(16) 00:14:28.514 fused_ordering(17) 00:14:28.514 fused_ordering(18) 00:14:28.514 fused_ordering(19) 00:14:28.514 fused_ordering(20) 00:14:28.514 fused_ordering(21) 00:14:28.514 fused_ordering(22) 00:14:28.514 fused_ordering(23) 00:14:28.514 fused_ordering(24) 00:14:28.514 fused_ordering(25) 00:14:28.514 fused_ordering(26) 00:14:28.514 fused_ordering(27) 00:14:28.514 fused_ordering(28) 00:14:28.514 fused_ordering(29) 00:14:28.514 fused_ordering(30) 00:14:28.514 fused_ordering(31) 00:14:28.514 fused_ordering(32) 00:14:28.514 fused_ordering(33) 00:14:28.514 fused_ordering(34) 00:14:28.514 fused_ordering(35) 00:14:28.514 fused_ordering(36) 00:14:28.514 fused_ordering(37) 00:14:28.514 fused_ordering(38) 00:14:28.514 fused_ordering(39) 00:14:28.514 fused_ordering(40) 00:14:28.514 fused_ordering(41) 00:14:28.514 fused_ordering(42) 00:14:28.514 fused_ordering(43) 00:14:28.514 fused_ordering(44) 00:14:28.514 fused_ordering(45) 00:14:28.514 fused_ordering(46) 00:14:28.514 fused_ordering(47) 00:14:28.514 fused_ordering(48) 00:14:28.514 fused_ordering(49) 00:14:28.514 fused_ordering(50) 00:14:28.514 fused_ordering(51) 00:14:28.514 fused_ordering(52) 00:14:28.514 fused_ordering(53) 00:14:28.514 fused_ordering(54) 00:14:28.514 fused_ordering(55) 00:14:28.514 fused_ordering(56) 00:14:28.514 fused_ordering(57) 00:14:28.514 fused_ordering(58) 00:14:28.514 fused_ordering(59) 00:14:28.514 fused_ordering(60) 00:14:28.514 fused_ordering(61) 00:14:28.514 fused_ordering(62) 00:14:28.514 fused_ordering(63) 00:14:28.514 fused_ordering(64) 00:14:28.514 fused_ordering(65) 00:14:28.514 fused_ordering(66) 00:14:28.514 fused_ordering(67) 00:14:28.514 fused_ordering(68) 00:14:28.514 fused_ordering(69) 00:14:28.514 fused_ordering(70) 00:14:28.514 fused_ordering(71) 00:14:28.514 fused_ordering(72) 00:14:28.514 fused_ordering(73) 00:14:28.514 fused_ordering(74) 00:14:28.514 fused_ordering(75) 00:14:28.514 fused_ordering(76) 00:14:28.514 fused_ordering(77) 00:14:28.514 fused_ordering(78) 00:14:28.514 fused_ordering(79) 00:14:28.514 fused_ordering(80) 00:14:28.514 fused_ordering(81) 00:14:28.514 fused_ordering(82) 00:14:28.514 fused_ordering(83) 00:14:28.514 fused_ordering(84) 00:14:28.514 fused_ordering(85) 00:14:28.514 fused_ordering(86) 00:14:28.514 fused_ordering(87) 00:14:28.514 fused_ordering(88) 00:14:28.514 fused_ordering(89) 00:14:28.514 fused_ordering(90) 00:14:28.514 fused_ordering(91) 00:14:28.514 fused_ordering(92) 00:14:28.514 fused_ordering(93) 00:14:28.514 fused_ordering(94) 00:14:28.514 fused_ordering(95) 00:14:28.514 fused_ordering(96) 00:14:28.514 
[fused_ordering(97) through fused_ordering(956) elided: the run enumerates every entry from fused_ordering(0) to fused_ordering(1023) consecutively, with the interleaved timestamps advancing from 00:14:28.514 to 00:14:29.548]
fused_ordering(957) 00:14:29.548 fused_ordering(958) 00:14:29.548 fused_ordering(959) 00:14:29.548 fused_ordering(960) 00:14:29.548 fused_ordering(961) 00:14:29.548 fused_ordering(962) 00:14:29.548 fused_ordering(963) 00:14:29.548 fused_ordering(964) 00:14:29.548 fused_ordering(965) 00:14:29.548 fused_ordering(966) 00:14:29.548 fused_ordering(967) 00:14:29.548 fused_ordering(968) 00:14:29.548 fused_ordering(969) 00:14:29.548 fused_ordering(970) 00:14:29.548 fused_ordering(971) 00:14:29.548 fused_ordering(972) 00:14:29.548 fused_ordering(973) 00:14:29.548 fused_ordering(974) 00:14:29.548 fused_ordering(975) 00:14:29.548 fused_ordering(976) 00:14:29.548 fused_ordering(977) 00:14:29.548 fused_ordering(978) 00:14:29.548 fused_ordering(979) 00:14:29.548 fused_ordering(980) 00:14:29.548 fused_ordering(981) 00:14:29.548 fused_ordering(982) 00:14:29.548 fused_ordering(983) 00:14:29.548 fused_ordering(984) 00:14:29.548 fused_ordering(985) 00:14:29.548 fused_ordering(986) 00:14:29.548 fused_ordering(987) 00:14:29.548 fused_ordering(988) 00:14:29.548 fused_ordering(989) 00:14:29.548 fused_ordering(990) 00:14:29.548 fused_ordering(991) 00:14:29.548 fused_ordering(992) 00:14:29.548 fused_ordering(993) 00:14:29.548 fused_ordering(994) 00:14:29.548 fused_ordering(995) 00:14:29.548 fused_ordering(996) 00:14:29.548 fused_ordering(997) 00:14:29.548 fused_ordering(998) 00:14:29.548 fused_ordering(999) 00:14:29.548 fused_ordering(1000) 00:14:29.548 fused_ordering(1001) 00:14:29.548 fused_ordering(1002) 00:14:29.548 fused_ordering(1003) 00:14:29.548 fused_ordering(1004) 00:14:29.548 fused_ordering(1005) 00:14:29.548 fused_ordering(1006) 00:14:29.548 fused_ordering(1007) 00:14:29.548 fused_ordering(1008) 00:14:29.548 fused_ordering(1009) 00:14:29.548 fused_ordering(1010) 00:14:29.548 fused_ordering(1011) 00:14:29.548 fused_ordering(1012) 00:14:29.548 fused_ordering(1013) 00:14:29.548 fused_ordering(1014) 00:14:29.548 fused_ordering(1015) 00:14:29.548 fused_ordering(1016) 00:14:29.548 fused_ordering(1017) 00:14:29.548 fused_ordering(1018) 00:14:29.548 fused_ordering(1019) 00:14:29.548 fused_ordering(1020) 00:14:29.548 fused_ordering(1021) 00:14:29.548 fused_ordering(1022) 00:14:29.548 fused_ordering(1023) 00:14:29.548 08:26:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:29.548 08:26:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:29.548 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:29.548 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:29.548 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:29.548 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:29.548 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:29.548 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:29.548 rmmod nvme_tcp 00:14:29.548 rmmod nvme_fabrics 00:14:29.548 rmmod nvme_keyring 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 224301 ']' 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 224301 
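killprocess, expanded in the lines that follow, is the harness's guarded way of stopping the target it started earlier (pid 224301). A simplified stand-alone rendering of this teardown — module unload plus a checked kill — with the kill/wait details assumed rather than copied from the harness:

# Simplified sketch of the nvmftestfini/killprocess teardown; the kill/wait details are assumptions.
sync
modprobe -v -r nvme-tcp        # unloading nvme-tcp also drops nvme_fabrics/nvme_keyring, as logged above
modprobe -v -r nvme-fabrics
if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"            # polite SIGTERM first; a stuck reactor would need kill -9
    wait "$nvmfpid" 2>/dev/null
fi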
00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 224301 ']' 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 224301 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 224301 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 224301' 00:14:29.808 killing process with pid 224301 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 224301 00:14:29.808 [2024-05-15 08:26:16.625849] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:29.808 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 224301 00:14:30.067 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.067 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.067 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.067 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.067 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.067 08:26:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.067 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.067 08:26:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.969 08:26:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.969 00:14:31.969 real 0m10.462s 00:14:31.969 user 0m5.296s 00:14:31.969 sys 0m5.102s 00:14:31.969 08:26:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:31.969 08:26:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.969 ************************************ 00:14:31.969 END TEST nvmf_fused_ordering 00:14:31.969 ************************************ 00:14:31.969 08:26:18 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:31.969 08:26:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:31.969 08:26:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:31.969 08:26:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.969 ************************************ 00:14:31.969 START TEST nvmf_delete_subsystem 00:14:31.969 ************************************ 00:14:31.969 08:26:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:32.228 * 
Looking for test storage... 00:14:32.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:32.228 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.229 08:26:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:37.504 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:37.504 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:37.504 Found net devices under 0000:86:00.0: cvl_0_0 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:37.504 Found net devices under 0000:86:00.1: cvl_0_1 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.504 08:26:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.504 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.504 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.504 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:37.504 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.504 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.504 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:37.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:14:37.505 00:14:37.505 --- 10.0.0.2 ping statistics --- 00:14:37.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.505 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:14:37.505 00:14:37.505 --- 10.0.0.1 ping statistics --- 00:14:37.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.505 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=228078 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 228078 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 228078 ']' 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
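For reference, the nvmf_tcp_init sequence traced above wires the two E810 ports into a self-contained loopback: cvl_0_0 is moved into a fresh network namespace to act as the target side, cvl_0_1 stays in the root namespace as the initiator side, and an iptables rule opens TCP port 4420. A minimal standalone sketch of the same setup, with interface names and addresses taken from the trace and error handling omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

The two pings above are exactly the sanity checks the trace runs before declaring the topology usable.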
00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:37.505 08:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.505 [2024-05-15 08:26:24.293174] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:14:37.505 [2024-05-15 08:26:24.293215] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.505 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.505 [2024-05-15 08:26:24.347635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:37.505 [2024-05-15 08:26:24.418786] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.505 [2024-05-15 08:26:24.418825] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.505 [2024-05-15 08:26:24.418832] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.505 [2024-05-15 08:26:24.418840] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.505 [2024-05-15 08:26:24.418845] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.505 [2024-05-15 08:26:24.418887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.505 [2024-05-15 08:26:24.418890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.074 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:38.074 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:38.074 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.074 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:38.074 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.334 [2024-05-15 08:26:25.127740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.334 08:26:25 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.334 [2024-05-15 08:26:25.151743] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:38.334 [2024-05-15 08:26:25.151943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.334 NULL1 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.334 Delay0 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=228325 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:38.334 08:26:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:38.334 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.334 [2024-05-15 08:26:25.238560] subsystem.c:1536:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
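The provisioning delete_subsystem.sh just performed boils down to six RPCs plus the perf launch. The delay bdev is the piece that makes the test interesting: with -r/-t/-w/-n all set to 1000000 (bdev_delay latencies are given in microseconds, so roughly one second per I/O) and perf queueing 128 commands, plenty of I/O is guaranteed to still be in flight when the subsystem is deleted. A condensed sketch of the same sequence, with rpc.py standing in for the rpc_cmd wrapper:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512       # 1000 MiB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # initiator side (root ns): 5 s of 512-byte random I/O, 70% reads, queue depth 128
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

Every command and argument above appears verbatim in the trace; only the grouping into a single script is editorial.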
00:14:40.237 08:26:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.237 08:26:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.237 08:26:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[00:14:40.497-00:14:41.437: several hundred repeated completion lines condensed. Deleting the subsystem while perf holds up to 128 commands in flight aborts everything still queued, so this stretch of the log repeats 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)', status code type 0 (generic) with status code 0x08, i.e. Command Aborted due to SQ Deletion, interleaved with 'starting I/O failed: -6' (likely -ENXIO) for submissions attempted after the qpairs were torn down. The distinct error lines in the run were:
[2024-05-15 08:26:27.358752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e76a0 is same with the state(5) to be set
[2024-05-15 08:26:27.359390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f739c00bfe0 is same with the state(5) to be set
[2024-05-15 08:26:28.333460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e7060 is same with the state(5) to be set
[2024-05-15 08:26:28.361527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f739c00c2f0 is same with the state(5) to be set
[2024-05-15 08:26:28.363291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e80c0 is same with the state(5) to be set
[2024-05-15 08:26:28.363446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8f10 is same with the state(5) to be set
[2024-05-15 08:26:28.363586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10efc20 is same with the state(5) to be set]
00:14:41.437 Initializing NVMe Controllers
00:14:41.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:41.437 Controller IO queue size 128, less than required.
00:14:41.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:41.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:41.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:41.437 Initialization complete. Launching workers.
00:14:41.437 ======================================================== 00:14:41.437 Latency(us) 00:14:41.437 Device Information : IOPS MiB/s Average min max 00:14:41.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 193.19 0.09 945958.03 495.18 1013484.53 00:14:41.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.93 0.08 867339.97 231.19 1013256.08 00:14:41.437 ======================================================== 00:14:41.437 Total : 351.12 0.17 910596.58 231.19 1013484.53 00:14:41.437 00:14:41.437 [2024-05-15 08:26:28.364131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e7060 (9): Bad file descriptor 00:14:41.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:41.437 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.437 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:41.437 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 228325 00:14:41.437 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:42.005 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:42.005 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 228325 00:14:42.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (228325) - No such process 00:14:42.005 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 228325 00:14:42.005 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:42.005 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 228325 00:14:42.005 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 228325 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
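The NOT wait 228325 exchange above is autotest_common.sh's polarity-inverting helper: the step passes precisely because wait fails, proving the perf process is already gone. Stripped of the argument validation the trace shows, the helper behaves like this simplified reconstruction (not the verbatim source):

    NOT() {
        # succeed exactly when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT wait 228325   # wait exits non-zero for a dead/unknown PID, so NOT returns 0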
00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:42.006 [2024-05-15 08:26:28.892855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=228841 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 228841 00:14:42.006 08:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:42.006 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.006 [2024-05-15 08:26:28.959558] subsystem.c:1536:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
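What follows is the bounded wait for the second perf run (PID 228841) to die once its subsystem is deleted out from under it. The repeating (( delay++ > 20 )) / kill -0 / sleep 0.5 trio below comes from a loop in delete_subsystem.sh that, condensed, amounts to this (variable names as in the trace; the timeout branch is paraphrased):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1   # give up after ~10 s of 0.5 s polls
        sleep 0.5
    done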
00:14:42.573 08:26:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:42.573 08:26:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 228841 00:14:42.573 08:26:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.140 08:26:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.140 08:26:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 228841 00:14:43.140 08:26:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.399 08:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.399 08:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 228841 00:14:43.399 08:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.967 08:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.967 08:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 228841 00:14:43.967 08:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.534 08:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.534 08:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 228841 00:14:44.534 08:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.102 08:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.102 08:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 228841 00:14:45.102 08:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.102 Initializing NVMe Controllers 00:14:45.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.102 Controller IO queue size 128, less than required. 00:14:45.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:45.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:45.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:45.102 Initialization complete. Launching workers. 
00:14:45.102 ======================================================== 00:14:45.102 Latency(us) 00:14:45.102 Device Information : IOPS MiB/s Average min max 00:14:45.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002835.22 1000148.99 1009229.55 00:14:45.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004798.96 1000308.51 1012765.07 00:14:45.102 ======================================================== 00:14:45.102 Total : 256.00 0.12 1003817.09 1000148.99 1012765.07 00:14:45.102 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 228841 00:14:45.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (228841) - No such process 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 228841 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.674 rmmod nvme_tcp 00:14:45.674 rmmod nvme_fabrics 00:14:45.674 rmmod nvme_keyring 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 228078 ']' 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 228078 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 228078 ']' 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 228078 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 228078 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 228078' 00:14:45.674 killing process with pid 228078 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 228078 00:14:45.674 [2024-05-15 08:26:32.542390] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:45.674 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 228078 00:14:45.934 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.934 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:45.934 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:45.934 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.934 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.934 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.934 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.934 08:26:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.843 08:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:47.843 00:14:47.843 real 0m15.855s 00:14:47.843 user 0m30.203s 00:14:47.843 sys 0m4.724s 00:14:47.843 08:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:47.843 08:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:47.843 ************************************ 00:14:47.843 END TEST nvmf_delete_subsystem 00:14:47.843 ************************************ 00:14:47.843 08:26:34 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:47.843 08:26:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:47.843 08:26:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:47.843 08:26:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:48.102 ************************************ 00:14:48.102 START TEST nvmf_ns_masking 00:14:48.102 ************************************ 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:48.102 * Looking for test storage... 
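Before ns_masking gets going, it is worth condensing the nvmftestfini teardown that just ran for nvmf_delete_subsystem: sync, unload the kernel initiator modules, kill the target, and undo the namespace plumbing. An approximate equivalent of the traced commands (the _remove_spdk_ns body runs under xtrace_disable, so the netns deletion line is an assumption about what it does):

    sync
    modprobe -v -r nvme-tcp            # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
    modprobe -v -r nvme-fabrics
    kill 228078                        # the nvmf_tgt started for this test
    ip netns delete cvl_0_0_ns_spdk    # assumption: what _remove_spdk_ns does behind xtrace_disable
    ip -4 addr flush cvl_0_1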
00:14:48.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=[long value condensed: /opt/go/1.21.1/bin, /opt/golangci/1.54.2/bin and /opt/protoc/21.7/bin prepended over and over ahead of the stock system PATH ending in /var/lib/snapd/snap/bin] 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[same value, now led by /opt/protoc/21.7/bin] 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo [the exported PATH, condensed as above] 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:48.102 08:26:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=cd61c57c-90f5-4a67-b40e-d8dcd8c4d163 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:48.102 08:26:35 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:48.102 08:26:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:53.376 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.376 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:53.376 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:53.376 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:53.376 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:53.376 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:53.376 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:53.376 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:53.376 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:53.376 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:53.377 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:53.377 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:53.377 Found net devices under 0000:86:00.0: cvl_0_0 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:53.377 Found net devices under 0000:86:00.1: cvl_0_1 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:53.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:14:53.377 00:14:53.377 --- 10.0.0.2 ping statistics --- 00:14:53.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.377 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:53.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:14:53.377 00:14:53.377 --- 10.0.0.1 ping statistics --- 00:14:53.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.377 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=233006 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 233006 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 233006 ']' 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:53.377 08:26:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:53.377 [2024-05-15 08:26:40.341010] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
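With both pings answering, the fabric for the test is in place. The namespace plumbing traced above (nvmf/common.sh@242-268) condenses to the sketch below; the interface names, IPs and namespace name are taken verbatim from this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Running the target inside its own namespace is what lets a single two-port host act as both initiator and target over a real E810 link.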
00:14:53.377 [2024-05-15 08:26:40.341050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.377 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.377 [2024-05-15 08:26:40.398395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.636 [2024-05-15 08:26:40.479521] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.636 [2024-05-15 08:26:40.479556] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.636 [2024-05-15 08:26:40.479564] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.636 [2024-05-15 08:26:40.479570] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.636 [2024-05-15 08:26:40.479575] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.636 [2024-05-15 08:26:40.479611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.636 [2024-05-15 08:26:40.479627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.636 [2024-05-15 08:26:40.479740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.636 [2024-05-15 08:26:40.479742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.203 08:26:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.203 08:26:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:54.203 08:26:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.203 08:26:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.203 08:26:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:54.203 08:26:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.203 08:26:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:54.460 [2024-05-15 08:26:41.350846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.460 08:26:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:54.460 08:26:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:54.460 08:26:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:54.718 Malloc1 00:14:54.718 08:26:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:54.977 Malloc2 00:14:54.977 08:26:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:54.977 08:26:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:55.236 08:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.493 [2024-05-15 08:26:42.283083] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:55.494 [2024-05-15 08:26:42.283324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.494 08:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:55.494 08:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cd61c57c-90f5-4a67-b40e-d8dcd8c4d163 -a 10.0.0.2 -s 4420 -i 4 00:14:55.494 08:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:55.494 08:26:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:55.494 08:26:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.494 08:26:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:55.494 08:26:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:58.021 [ 0]:0x1 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3ec62579cd5945559de668963a6892df 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3ec62579cd5945559de668963a6892df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:58.021 [ 0]:0x1 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3ec62579cd5945559de668963a6892df 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3ec62579cd5945559de668963a6892df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:58.021 [ 1]:0x2 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=09242e1dbe5d41fc8ae5b9d917d0650c 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 09242e1dbe5d41fc8ae5b9d917d0650c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:58.021 08:26:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.281 08:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.281 08:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:58.538 08:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:58.538 08:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cd61c57c-90f5-4a67-b40e-d8dcd8c4d163 -a 10.0.0.2 -s 4420 -i 4 00:14:58.796 08:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:58.796 08:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:58.796 08:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.796 08:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:58.796 08:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:58.796 08:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.691 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.948 [ 0]:0x2 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=09242e1dbe5d41fc8ae5b9d917d0650c 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 09242e1dbe5d41fc8ae5b9d917d0650c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.948 08:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:01.205 [ 0]:0x1 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3ec62579cd5945559de668963a6892df 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3ec62579cd5945559de668963a6892df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.205 [ 1]:0x2 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=09242e1dbe5d41fc8ae5b9d917d0650c 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 09242e1dbe5d41fc8ae5b9d917d0650c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.205 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:01.462 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:01.463 08:26:48 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.463 [ 0]:0x2 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=09242e1dbe5d41fc8ae5b9d917d0650c 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 09242e1dbe5d41fc8ae5b9d917d0650c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.463 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:01.720 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:01.720 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cd61c57c-90f5-4a67-b40e-d8dcd8c4d163 -a 10.0.0.2 -s 4420 -i 4 00:15:01.977 08:26:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:01.978 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:01.978 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.978 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:01.978 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:01.978 08:26:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:03.875 08:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:03.875 08:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:03.875 08:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.875 08:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:03.875 08:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.875 08:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:03.875 08:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:15:03.875 08:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:04.132 08:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:04.132 08:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:04.132 08:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:04.132 08:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.132 08:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:04.132 [ 0]:0x1 00:15:04.132 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.132 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.132 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3ec62579cd5945559de668963a6892df 00:15:04.132 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3ec62579cd5945559de668963a6892df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.132 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:04.132 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.132 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:04.132 [ 1]:0x2 00:15:04.132 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:04.132 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=09242e1dbe5d41fc8ae5b9d917d0650c 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 09242e1dbe5d41fc8ae5b9d917d0650c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.390 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:04.648 [ 0]:0x2 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=09242e1dbe5d41fc8ae5b9d917d0650c 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 09242e1dbe5d41fc8ae5b9d917d0650c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:04.648 [2024-05-15 08:26:51.625790] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:04.648 
request: 00:15:04.648 { 00:15:04.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.648 "nsid": 2, 00:15:04.648 "host": "nqn.2016-06.io.spdk:host1", 00:15:04.648 "method": "nvmf_ns_remove_host", 00:15:04.648 "req_id": 1 00:15:04.648 } 00:15:04.648 Got JSON-RPC error response 00:15:04.648 response: 00:15:04.648 { 00:15:04.648 "code": -32602, 00:15:04.648 "message": "Invalid parameters" 00:15:04.648 } 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.648 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:04.905 [ 0]:0x2 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=09242e1dbe5d41fc8ae5b9d917d0650c 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 09242e1dbe5d41fc8ae5b9d917d0650c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:04.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.905 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.162 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:05.162 08:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:05.162 08:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.163 08:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:05.163 08:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.163 08:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:05.163 08:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.163 08:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.163 rmmod nvme_tcp 00:15:05.163 rmmod nvme_fabrics 00:15:05.163 rmmod nvme_keyring 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 233006 ']' 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 233006 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 233006 ']' 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 233006 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 233006 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 233006' 00:15:05.163 killing process with pid 233006 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 233006 00:15:05.163 [2024-05-15 08:26:52.094047] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:05.163 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 233006 00:15:05.420 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.420 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.420 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.420 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
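The masking behaviour this test just verified comes down to three RPCs plus the visibility probe run after each one. A condensed sketch, with the RPC commands taken from the trace (the rpc.py path is shortened here) and ns_is_visible reconstructed from the commands at target/ns_masking.sh@39-41 rather than copied from the source:

  rpc=scripts/rpc.py
  $rpc nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  $rpc nvmf_ns_add_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # unmask NSID 1 for host1
  $rpc nvmf_ns_remove_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # mask it again

  ns_is_visible() {   # sketch only; the real helper lives in test/nvmf/target/ns_masking.sh
      local nsid=$1 nguid
      nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != 00000000000000000000000000000000 ]]  # a masked namespace reads back an all-zero NGUID
  }

The attempt to run nvmf_ns_remove_host against NSID 2, which was created auto-visible, is the negative case: the target rejects it with -32602 Invalid parameters, exactly as the JSON-RPC exchange above shows.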
00:15:05.420 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.420 08:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.420 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.420 08:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.394 08:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:07.394 00:15:07.394 real 0m19.518s 00:15:07.394 user 0m51.452s 00:15:07.394 sys 0m5.399s 00:15:07.394 08:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:07.395 08:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:07.395 ************************************ 00:15:07.395 END TEST nvmf_ns_masking 00:15:07.395 ************************************ 00:15:07.654 08:26:54 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:07.654 08:26:54 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:07.654 08:26:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:07.654 08:26:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:07.654 08:26:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:07.654 ************************************ 00:15:07.654 START TEST nvmf_nvme_cli 00:15:07.654 ************************************ 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:07.654 * Looking for test storage... 
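Before the trace moves on, the connect sequence the masking test used three times is worth one condensed sketch, since nvme_cli below exercises the same pattern. Flags are verbatim from the trace; the per-host UUID passed with -I is elided here:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
       -a 10.0.0.2 -s 4420 -i 4
  i=0 expected=1                           # waitforserial, as traced from autotest_common.sh;
  while (( i++ <= 15 )); do                # expected becomes 2 when two namespaces are attached
      (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == expected )) && break
      sleep 2
  done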
00:15:07.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.654 08:26:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.655 08:26:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:12.926 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:12.926 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:12.926 Found net devices under 0000:86:00.0: cvl_0_0 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:12.926 Found net devices under 0000:86:00.1: cvl_0_1 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:12.926 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:12.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:15:12.927 00:15:12.927 --- 10.0.0.2 ping statistics --- 00:15:12.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.927 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:12.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:15:12.927 00:15:12.927 --- 10.0.0.1 ping statistics --- 00:15:12.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.927 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=238522 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 238522 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 238522 ']' 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:12.927 08:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.927 [2024-05-15 08:26:59.801724] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:15:12.927 [2024-05-15 08:26:59.801763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.927 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.927 [2024-05-15 08:26:59.857765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.927 [2024-05-15 08:26:59.937909] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.927 [2024-05-15 08:26:59.937944] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
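The nvmf_tcp_init sequence traced above isolates the two ice-bound ports by moving one into a private network namespace and addressing the pair as a point-to-point 10.0.0.0/24 link, so target and initiator can share one machine. A minimal standalone sketch of the same steps (the interface names cvl_0_0/cvl_0_1 and the namespace name are simply what this run found; they will differ on other hardware):

    # Target side lives in its own namespace; initiator stays in the root one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the link.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring up both ports plus the namespace loopback.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP traffic on the default port, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, the nvmf_tgt invocation above is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why every listener in this test binds to 10.0.0.2.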
00:15:12.927 [2024-05-15 08:26:59.937951] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.927 [2024-05-15 08:26:59.937958] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.927 [2024-05-15 08:26:59.937963] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.927 [2024-05-15 08:26:59.937998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.927 [2024-05-15 08:26:59.938095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.927 [2024-05-15 08:26:59.938111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.927 [2024-05-15 08:26:59.938112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.861 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.862 [2024-05-15 08:27:00.665221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.862 Malloc0 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.862 Malloc1 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.862 08:27:00 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.862 [2024-05-15 08:27:00.746442] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:13.862 [2024-05-15 08:27:00.746681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:13.862 00:15:13.862 Discovery Log Number of Records 2, Generation counter 2 00:15:13.862 =====Discovery Log Entry 0====== 00:15:13.862 trtype: tcp 00:15:13.862 adrfam: ipv4 00:15:13.862 subtype: current discovery subsystem 00:15:13.862 treq: not required 00:15:13.862 portid: 0 00:15:13.862 trsvcid: 4420 00:15:13.862 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:13.862 traddr: 10.0.0.2 00:15:13.862 eflags: explicit discovery connections, duplicate discovery information 00:15:13.862 sectype: none 00:15:13.862 =====Discovery Log Entry 1====== 00:15:13.862 trtype: tcp 00:15:13.862 adrfam: ipv4 00:15:13.862 subtype: nvme subsystem 00:15:13.862 treq: not required 00:15:13.862 portid: 0 00:15:13.862 trsvcid: 4420 00:15:13.862 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:13.862 traddr: 10.0.0.2 00:15:13.862 eflags: none 00:15:13.862 sectype: none 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.862 08:27:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:14.120 08:27:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:14.120 08:27:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
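Once nvmf_tgt answers on /var/tmp/spdk.sock, the whole nvme_cli target bring-up traced above reduces to a short RPC sequence. A condensed recap using the exact values from this run ($rpc stands for the full scripts/rpc.py path shown in the trace; the -o/-u transport flags are copied verbatim, -u 8192 being the in-capsule data size):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, flags as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0         # two 64 MiB RAM disks, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Host side: the discovery log then reports two records, the discovery
    # subsystem itself plus cnode1, exactly as printed above.
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562

Note the warning the listener RPCs trigger: the [listen_]address.transport field is deprecated in favor of trtype and scheduled for removal in v24.09.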
00:15:14.120 08:27:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:14.120 08:27:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:14.120 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:14.120 08:27:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:15.055 08:27:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:15.055 08:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:15:15.055 08:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.055 08:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:15.055 08:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:15.055 08:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:17.588 /dev/nvme0n1 ]] 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:17.588 08:27:04 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:17.588 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:17.589 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:17.589 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:17.589 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:17.589 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:17.589 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.849 rmmod nvme_tcp 00:15:17.849 rmmod nvme_fabrics 00:15:17.849 rmmod nvme_keyring 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 238522 ']' 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 238522 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 238522 ']' 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 238522 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 238522 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 238522' 00:15:17.849 killing process with pid 238522 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 238522 00:15:17.849 [2024-05-15 08:27:04.758342] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:17.849 08:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 238522 00:15:18.110 08:27:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.110 08:27:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.110 08:27:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.110 08:27:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.110 08:27:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.110 08:27:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.110 08:27:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.110 08:27:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.658 08:27:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:20.658 00:15:20.658 real 0m12.594s 00:15:20.658 user 0m21.591s 00:15:20.658 sys 0m4.452s 00:15:20.658 08:27:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:20.658 08:27:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:20.658 ************************************ 00:15:20.658 END TEST nvmf_nvme_cli 00:15:20.658 ************************************ 00:15:20.658 08:27:07 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:20.658 08:27:07 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:20.658 08:27:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:20.658 08:27:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:20.658 08:27:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.658 ************************************ 00:15:20.658 START TEST 
nvmf_vfio_user 00:15:20.658 ************************************ 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:20.658 * Looking for test storage... 00:15:20.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
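Note how nvmf/common.sh, re-sourced just above for the vfio-user test, generates the host identity rather than hard-coding it: the uuid:80aaeb9f-... NQN used by the discover/connect commands earlier in this log comes from nvme gen-hostnqn. A sketch of the convention (the ##*: derivation of the host ID is an assumption that matches the values in this trace, not a quote of the script):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: bare UUID, reused as --hostid
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"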
00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=240333 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 240333' 00:15:20.658 Process pid: 240333 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 240333 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 240333 ']' 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:20.658 08:27:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:20.658 [2024-05-15 08:27:07.318023] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:15:20.658 [2024-05-15 08:27:07.318070] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.658 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.658 [2024-05-15 08:27:07.374520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.658 [2024-05-15 08:27:07.448843] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.658 [2024-05-15 08:27:07.448888] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.658 [2024-05-15 08:27:07.448895] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.658 [2024-05-15 08:27:07.448902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.658 [2024-05-15 08:27:07.448908] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
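The waitforlisten step above essentially polls the application's RPC socket until it responds; the EAL and reactor notices that follow are printed while that poll is in flight. A minimal equivalent of the helper (assumed shape, not the exact script; rpc_get_methods is a stock SPDK RPC, and the retry bound mirrors the max_retries=100 seen in the trace):

    # Poll /var/tmp/spdk.sock until nvmf_tgt is ready to serve RPCs.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        if "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done

Here -s selects the UNIX domain socket and -t is the per-request timeout, so the loop returns as soon as the reactors are up and the RPC server is listening.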
00:15:20.658 [2024-05-15 08:27:07.448952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.658 [2024-05-15 08:27:07.449048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.658 [2024-05-15 08:27:07.449135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.658 [2024-05-15 08:27:07.449137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.228 08:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:21.228 08:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:21.228 08:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:22.167 08:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:22.425 08:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:22.425 08:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:22.425 08:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.425 08:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:22.425 08:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:22.685 Malloc1 00:15:22.685 08:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:22.685 08:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:22.944 08:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:23.204 [2024-05-15 08:27:10.033810] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:23.204 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.204 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:23.204 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:23.463 Malloc2 00:15:23.463 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:23.463 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:23.722 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
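The VFIOUSER bring-up just traced differs from the TCP case in only two respects: the listener address is a filesystem directory that will hold the vfio-user control socket (created before the listener is added, hence the mkdir), and the service id is a placeholder 0. A condensed sketch for the first of the two controllers, with values from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1    # this directory is the traddr
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

A vfio-user-aware initiator then attaches to that directory the way spdk_nvme_identify does below, mapping the emulated controller's BARs as if it were a local PCIe function:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'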
00:15:23.983 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:23.983 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:23.983 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.983 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:23.983 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:23.984 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:23.984 [2024-05-15 08:27:10.851933] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:15:23.984 [2024-05-15 08:27:10.851969] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241038 ] 00:15:23.984 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.984 [2024-05-15 08:27:10.882327] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:23.984 [2024-05-15 08:27:10.884636] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:23.984 [2024-05-15 08:27:10.884654] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3857174000 00:15:23.984 [2024-05-15 08:27:10.885638] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.984 [2024-05-15 08:27:10.886644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.984 [2024-05-15 08:27:10.887641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.984 [2024-05-15 08:27:10.888650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:23.984 [2024-05-15 08:27:10.889651] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:23.984 [2024-05-15 08:27:10.890654] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.984 [2024-05-15 08:27:10.891658] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:23.984 [2024-05-15 08:27:10.892654] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.984 [2024-05-15 08:27:10.893672] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:23.984 [2024-05-15 08:27:10.893683] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3857169000 00:15:23.984 [2024-05-15 08:27:10.894624] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:23.984 [2024-05-15 08:27:10.907220] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:23.984 [2024-05-15 08:27:10.907240] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:23.984 [2024-05-15 08:27:10.909787] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:23.984 [2024-05-15 08:27:10.909828] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:23.984 [2024-05-15 08:27:10.909903] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:23.984 [2024-05-15 08:27:10.909916] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:23.984 [2024-05-15 08:27:10.909920] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:23.984 [2024-05-15 08:27:10.910781] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:23.984 [2024-05-15 08:27:10.910789] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:23.984 [2024-05-15 08:27:10.910795] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:23.984 [2024-05-15 08:27:10.911780] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:23.984 [2024-05-15 08:27:10.911787] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:23.984 [2024-05-15 08:27:10.911794] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:23.984 [2024-05-15 08:27:10.912783] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:23.984 [2024-05-15 08:27:10.912791] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:23.984 [2024-05-15 08:27:10.913788] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:23.984 [2024-05-15 08:27:10.913795] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:23.984 [2024-05-15 08:27:10.913799] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:23.984 [2024-05-15 08:27:10.913805] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:23.984 
[2024-05-15 08:27:10.913910] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:23.984 [2024-05-15 08:27:10.913914] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:23.984 [2024-05-15 08:27:10.913918] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:23.984 [2024-05-15 08:27:10.918169] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:23.984 [2024-05-15 08:27:10.918814] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:23.984 [2024-05-15 08:27:10.919825] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:23.984 [2024-05-15 08:27:10.920825] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:23.984 [2024-05-15 08:27:10.920881] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:23.984 [2024-05-15 08:27:10.921837] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:23.984 [2024-05-15 08:27:10.921844] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:23.984 [2024-05-15 08:27:10.921849] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:23.984 [2024-05-15 08:27:10.921866] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:23.984 [2024-05-15 08:27:10.921875] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:23.984 [2024-05-15 08:27:10.921889] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:23.984 [2024-05-15 08:27:10.921894] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:23.984 [2024-05-15 08:27:10.921907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:23.984 [2024-05-15 08:27:10.921953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:23.984 [2024-05-15 08:27:10.921962] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:23.984 [2024-05-15 08:27:10.921967] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:23.984 [2024-05-15 08:27:10.921970] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:23.984 [2024-05-15 08:27:10.921975] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:23.984 [2024-05-15 08:27:10.921979] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:23.984 [2024-05-15 08:27:10.921983] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:23.984 [2024-05-15 08:27:10.921987] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:23.984 [2024-05-15 08:27:10.921996] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:23.984 [2024-05-15 08:27:10.922009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:23.984 [2024-05-15 08:27:10.922021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:23.984 [2024-05-15 08:27:10.922030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.984 [2024-05-15 08:27:10.922038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.984 [2024-05-15 08:27:10.922045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.984 [2024-05-15 08:27:10.922052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.984 [2024-05-15 08:27:10.922056] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:23.984 [2024-05-15 08:27:10.922063] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:23.984 [2024-05-15 08:27:10.922071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:23.984 [2024-05-15 08:27:10.922080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:23.984 [2024-05-15 08:27:10.922085] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:23.984 [2024-05-15 08:27:10.922090] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:23.984 [2024-05-15 08:27:10.922097] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:23.984 [2024-05-15 08:27:10.922104] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:23.984 [2024-05-15 08:27:10.922112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:23.984 [2024-05-15 
08:27:10.922125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:23.984 [2024-05-15 08:27:10.922169] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:23.984 [2024-05-15 08:27:10.922176] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:23.985 [2024-05-15 08:27:10.922182] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:23.985 [2024-05-15 08:27:10.922186] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:23.985 [2024-05-15 08:27:10.922192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:23.985 [2024-05-15 08:27:10.922202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:23.985 [2024-05-15 08:27:10.922212] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:23.985 [2024-05-15 08:27:10.922220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:23.985 [2024-05-15 08:27:10.922227] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:23.985 [2024-05-15 08:27:10.922232] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:23.985 [2024-05-15 08:27:10.922236] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:23.985 [2024-05-15 08:27:10.922242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:23.985 [2024-05-15 08:27:10.922263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:23.985 [2024-05-15 08:27:10.922273] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:23.985 [2024-05-15 08:27:10.922280] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:23.985 [2024-05-15 08:27:10.922286] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:23.985 [2024-05-15 08:27:10.922290] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:23.985 [2024-05-15 08:27:10.922295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:23.985 [2024-05-15 08:27:10.922309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:23.985 [2024-05-15 08:27:10.922318] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:23.985 
[2024-05-15 08:27:10.922323] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:23.985 [2024-05-15 08:27:10.922330] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:23.985 [2024-05-15 08:27:10.922336] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:23.985 [2024-05-15 08:27:10.922341] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:23.985 [2024-05-15 08:27:10.922345] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:23.985 [2024-05-15 08:27:10.922349] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:23.985 [2024-05-15 08:27:10.922353] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:23.985 [2024-05-15 08:27:10.922372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:23.985 [2024-05-15 08:27:10.922386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:23.985 [2024-05-15 08:27:10.922396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:23.985 [2024-05-15 08:27:10.922405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:23.985 [2024-05-15 08:27:10.922414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:23.985 [2024-05-15 08:27:10.922425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:23.985 [2024-05-15 08:27:10.922434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:23.985 [2024-05-15 08:27:10.922444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:23.985 [2024-05-15 08:27:10.922453] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:23.985 [2024-05-15 08:27:10.922457] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:23.985 [2024-05-15 08:27:10.922460] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:23.985 [2024-05-15 08:27:10.922463] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:23.985 [2024-05-15 08:27:10.922468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:23.985 [2024-05-15 08:27:10.922474] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:23.985 [2024-05-15 08:27:10.922478] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:23.985 [2024-05-15 08:27:10.922483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:23.985 [2024-05-15 08:27:10.922489] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:23.985 [2024-05-15 08:27:10.922493] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:23.985 [2024-05-15 08:27:10.922498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:23.985 [2024-05-15 08:27:10.922594] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:23.985 [2024-05-15 08:27:10.922598] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:23.985 [2024-05-15 08:27:10.922603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:23.985 [2024-05-15 08:27:10.922610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:23.985 [2024-05-15 08:27:10.922622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:23.985 [2024-05-15 08:27:10.922630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:23.985 [2024-05-15 08:27:10.922638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:23.985 ===================================================== 00:15:23.985 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:23.985 ===================================================== 00:15:23.985 Controller Capabilities/Features 00:15:23.985 ================================ 00:15:23.985 Vendor ID: 4e58 00:15:23.985 Subsystem Vendor ID: 4e58 00:15:23.985 Serial Number: SPDK1 00:15:23.985 Model Number: SPDK bdev Controller 00:15:23.985 Firmware Version: 24.05 00:15:23.985 Recommended Arb Burst: 6 00:15:23.985 IEEE OUI Identifier: 8d 6b 50 00:15:23.985 Multi-path I/O 00:15:23.985 May have multiple subsystem ports: Yes 00:15:23.985 May have multiple controllers: Yes 00:15:23.985 Associated with SR-IOV VF: No 00:15:23.985 Max Data Transfer Size: 131072 00:15:23.985 Max Number of Namespaces: 32 00:15:23.985 Max Number of I/O Queues: 127 00:15:23.985 NVMe Specification Version (VS): 1.3 00:15:23.985 NVMe Specification Version (Identify): 1.3 00:15:23.985 Maximum Queue Entries: 256 00:15:23.985 Contiguous Queues Required: Yes 00:15:23.985 Arbitration Mechanisms Supported 00:15:23.985 Weighted Round Robin: Not Supported 00:15:23.985 Vendor Specific: Not Supported 00:15:23.985 Reset Timeout: 15000 ms 00:15:23.985 Doorbell Stride: 4 bytes 00:15:23.985 NVM Subsystem Reset: Not Supported 00:15:23.985 Command Sets Supported 00:15:23.985 NVM Command Set: Supported 00:15:23.985 Boot Partition: Not Supported 00:15:23.985 Memory Page Size Minimum: 4096 bytes 00:15:23.985 Memory Page Size Maximum: 4096 bytes 00:15:23.985 Persistent Memory Region: Not Supported 00:15:23.985 Optional Asynchronous 
Events Supported 00:15:23.985 Namespace Attribute Notices: Supported 00:15:23.985 Firmware Activation Notices: Not Supported 00:15:23.985 ANA Change Notices: Not Supported 00:15:23.985 PLE Aggregate Log Change Notices: Not Supported 00:15:23.985 LBA Status Info Alert Notices: Not Supported 00:15:23.985 EGE Aggregate Log Change Notices: Not Supported 00:15:23.985 Normal NVM Subsystem Shutdown event: Not Supported 00:15:23.985 Zone Descriptor Change Notices: Not Supported 00:15:23.985 Discovery Log Change Notices: Not Supported 00:15:23.985 Controller Attributes 00:15:23.985 128-bit Host Identifier: Supported 00:15:23.985 Non-Operational Permissive Mode: Not Supported 00:15:23.985 NVM Sets: Not Supported 00:15:23.985 Read Recovery Levels: Not Supported 00:15:23.985 Endurance Groups: Not Supported 00:15:23.985 Predictable Latency Mode: Not Supported 00:15:23.985 Traffic Based Keep ALive: Not Supported 00:15:23.985 Namespace Granularity: Not Supported 00:15:23.985 SQ Associations: Not Supported 00:15:23.985 UUID List: Not Supported 00:15:23.985 Multi-Domain Subsystem: Not Supported 00:15:23.985 Fixed Capacity Management: Not Supported 00:15:23.985 Variable Capacity Management: Not Supported 00:15:23.985 Delete Endurance Group: Not Supported 00:15:23.985 Delete NVM Set: Not Supported 00:15:23.985 Extended LBA Formats Supported: Not Supported 00:15:23.985 Flexible Data Placement Supported: Not Supported 00:15:23.985 00:15:23.985 Controller Memory Buffer Support 00:15:23.985 ================================ 00:15:23.985 Supported: No 00:15:23.985 00:15:23.985 Persistent Memory Region Support 00:15:23.985 ================================ 00:15:23.985 Supported: No 00:15:23.985 00:15:23.985 Admin Command Set Attributes 00:15:23.985 ============================ 00:15:23.985 Security Send/Receive: Not Supported 00:15:23.985 Format NVM: Not Supported 00:15:23.985 Firmware Activate/Download: Not Supported 00:15:23.985 Namespace Management: Not Supported 00:15:23.985 Device Self-Test: Not Supported 00:15:23.985 Directives: Not Supported 00:15:23.985 NVMe-MI: Not Supported 00:15:23.985 Virtualization Management: Not Supported 00:15:23.986 Doorbell Buffer Config: Not Supported 00:15:23.986 Get LBA Status Capability: Not Supported 00:15:23.986 Command & Feature Lockdown Capability: Not Supported 00:15:23.986 Abort Command Limit: 4 00:15:23.986 Async Event Request Limit: 4 00:15:23.986 Number of Firmware Slots: N/A 00:15:23.986 Firmware Slot 1 Read-Only: N/A 00:15:23.986 Firmware Activation Without Reset: N/A 00:15:23.986 Multiple Update Detection Support: N/A 00:15:23.986 Firmware Update Granularity: No Information Provided 00:15:23.986 Per-Namespace SMART Log: No 00:15:23.986 Asymmetric Namespace Access Log Page: Not Supported 00:15:23.986 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:23.986 Command Effects Log Page: Supported 00:15:23.986 Get Log Page Extended Data: Supported 00:15:23.986 Telemetry Log Pages: Not Supported 00:15:23.986 Persistent Event Log Pages: Not Supported 00:15:23.986 Supported Log Pages Log Page: May Support 00:15:23.986 Commands Supported & Effects Log Page: Not Supported 00:15:23.986 Feature Identifiers & Effects Log Page:May Support 00:15:23.986 NVMe-MI Commands & Effects Log Page: May Support 00:15:23.986 Data Area 4 for Telemetry Log: Not Supported 00:15:23.986 Error Log Page Entries Supported: 128 00:15:23.986 Keep Alive: Supported 00:15:23.986 Keep Alive Granularity: 10000 ms 00:15:23.986 00:15:23.986 NVM Command Set Attributes 00:15:23.986 ========================== 
00:15:23.986 Submission Queue Entry Size 00:15:23.986 Max: 64 00:15:23.986 Min: 64 00:15:23.986 Completion Queue Entry Size 00:15:23.986 Max: 16 00:15:23.986 Min: 16 00:15:23.986 Number of Namespaces: 32 00:15:23.986 Compare Command: Supported 00:15:23.986 Write Uncorrectable Command: Not Supported 00:15:23.986 Dataset Management Command: Supported 00:15:23.986 Write Zeroes Command: Supported 00:15:23.986 Set Features Save Field: Not Supported 00:15:23.986 Reservations: Not Supported 00:15:23.986 Timestamp: Not Supported 00:15:23.986 Copy: Supported 00:15:23.986 Volatile Write Cache: Present 00:15:23.986 Atomic Write Unit (Normal): 1 00:15:23.986 Atomic Write Unit (PFail): 1 00:15:23.986 Atomic Compare & Write Unit: 1 00:15:23.986 Fused Compare & Write: Supported 00:15:23.986 Scatter-Gather List 00:15:23.986 SGL Command Set: Supported (Dword aligned) 00:15:23.986 SGL Keyed: Not Supported 00:15:23.986 SGL Bit Bucket Descriptor: Not Supported 00:15:23.986 SGL Metadata Pointer: Not Supported 00:15:23.986 Oversized SGL: Not Supported 00:15:23.986 SGL Metadata Address: Not Supported 00:15:23.986 SGL Offset: Not Supported 00:15:23.986 Transport SGL Data Block: Not Supported 00:15:23.986 Replay Protected Memory Block: Not Supported 00:15:23.986 00:15:23.986 Firmware Slot Information 00:15:23.986 ========================= 00:15:23.986 Active slot: 1 00:15:23.986 Slot 1 Firmware Revision: 24.05 00:15:23.986 00:15:23.986 00:15:23.986 Commands Supported and Effects 00:15:23.986 ============================== 00:15:23.986 Admin Commands 00:15:23.986 -------------- 00:15:23.986 Get Log Page (02h): Supported 00:15:23.986 Identify (06h): Supported 00:15:23.986 Abort (08h): Supported 00:15:23.986 Set Features (09h): Supported 00:15:23.986 Get Features (0Ah): Supported 00:15:23.986 Asynchronous Event Request (0Ch): Supported 00:15:23.986 Keep Alive (18h): Supported 00:15:23.986 I/O Commands 00:15:23.986 ------------ 00:15:23.986 Flush (00h): Supported LBA-Change 00:15:23.986 Write (01h): Supported LBA-Change 00:15:23.986 Read (02h): Supported 00:15:23.986 Compare (05h): Supported 00:15:23.986 Write Zeroes (08h): Supported LBA-Change 00:15:23.986 Dataset Management (09h): Supported LBA-Change 00:15:23.986 Copy (19h): Supported LBA-Change 00:15:23.986 Unknown (79h): Supported LBA-Change 00:15:23.986 Unknown (7Ah): Supported 00:15:23.986 00:15:23.986 Error Log 00:15:23.986 ========= 00:15:23.986 00:15:23.986 Arbitration 00:15:23.986 =========== 00:15:23.986 Arbitration Burst: 1 00:15:23.986 00:15:23.986 Power Management 00:15:23.986 ================ 00:15:23.986 Number of Power States: 1 00:15:23.986 Current Power State: Power State #0 00:15:23.986 Power State #0: 00:15:23.986 Max Power: 0.00 W 00:15:23.986 Non-Operational State: Operational 00:15:23.986 Entry Latency: Not Reported 00:15:23.986 Exit Latency: Not Reported 00:15:23.986 Relative Read Throughput: 0 00:15:23.986 Relative Read Latency: 0 00:15:23.986 Relative Write Throughput: 0 00:15:23.986 Relative Write Latency: 0 00:15:23.986 Idle Power: Not Reported 00:15:23.986 Active Power: Not Reported 00:15:23.986 Non-Operational Permissive Mode: Not Supported 00:15:23.986 00:15:23.986 Health Information 00:15:23.986 ================== 00:15:23.986 Critical Warnings: 00:15:23.986 Available Spare Space: OK 00:15:23.986 Temperature: OK 00:15:23.986 Device Reliability: OK 00:15:23.986 Read Only: No 00:15:23.986 Volatile Memory Backup: OK 00:15:23.986
[2024-05-15 08:27:10.922723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:23.986 [2024-05-15 08:27:10.922732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:23.986 [2024-05-15 08:27:10.922754] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:23.986 [2024-05-15 08:27:10.922762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.986 [2024-05-15 08:27:10.922767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.986 [2024-05-15 08:27:10.922773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.986 [2024-05-15 08:27:10.922778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.986 [2024-05-15 08:27:10.922844] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:23.986 [2024-05-15 08:27:10.922853] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:23.986 [2024-05-15 08:27:10.923842] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.986 [2024-05-15 08:27:10.923889] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:23.986 [2024-05-15 08:27:10.923896] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:23.986 [2024-05-15 08:27:10.924852] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:23.986 [2024-05-15 08:27:10.924861] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:23.986 [2024-05-15 08:27:10.924908] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:23.986 [2024-05-15 08:27:10.926884] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:23.986
Current Temperature: 0 Kelvin (-273 Celsius) 00:15:23.986 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:23.986 Available Spare: 0% 00:15:23.986 Available Spare Threshold: 0% 00:15:23.986 Life Percentage Used: 0% 00:15:23.986 Data Units Read: 0 00:15:23.986 Data Units Written: 0 00:15:23.986 Host Read Commands: 0 00:15:23.986 Host Write Commands: 0 00:15:23.986 Controller Busy Time: 0 minutes 00:15:23.986 Power Cycles: 0 00:15:23.986 Power On Hours: 0 hours 00:15:23.986 Unsafe Shutdowns: 0 00:15:23.986 Unrecoverable Media Errors: 0 00:15:23.986 Lifetime Error Log Entries: 0 00:15:23.986 Warning Temperature Time: 0 minutes 00:15:23.986 Critical Temperature Time: 0 minutes 00:15:23.986 00:15:23.986 Number of Queues 00:15:23.986 ================ 00:15:23.986 Number of I/O Submission Queues: 127 00:15:23.986 Number of I/O Completion Queues: 127 00:15:23.986 00:15:23.986 Active Namespaces 00:15:23.986 ================= 00:15:23.986 Namespace
ID:1 00:15:23.986 Error Recovery Timeout: Unlimited 00:15:23.986 Command Set Identifier: NVM (00h) 00:15:23.986 Deallocate: Supported 00:15:23.986 Deallocated/Unwritten Error: Not Supported 00:15:23.986 Deallocated Read Value: Unknown 00:15:23.986 Deallocate in Write Zeroes: Not Supported 00:15:23.986 Deallocated Guard Field: 0xFFFF 00:15:23.986 Flush: Supported 00:15:23.986 Reservation: Supported 00:15:23.986 Namespace Sharing Capabilities: Multiple Controllers 00:15:23.986 Size (in LBAs): 131072 (0GiB) 00:15:23.986 Capacity (in LBAs): 131072 (0GiB) 00:15:23.986 Utilization (in LBAs): 131072 (0GiB) 00:15:23.986 NGUID: 646B7A73C27F4CFDA8F290A5657CC277 00:15:23.986 UUID: 646b7a73-c27f-4cfd-a8f2-90a5657cc277 00:15:23.986 Thin Provisioning: Not Supported 00:15:23.986 Per-NS Atomic Units: Yes 00:15:23.986 Atomic Boundary Size (Normal): 0 00:15:23.986 Atomic Boundary Size (PFail): 0 00:15:23.986 Atomic Boundary Offset: 0 00:15:23.986 Maximum Single Source Range Length: 65535 00:15:23.986 Maximum Copy Length: 65535 00:15:23.986 Maximum Source Range Count: 1 00:15:23.986 NGUID/EUI64 Never Reused: No 00:15:23.986 Namespace Write Protected: No 00:15:23.986 Number of LBA Formats: 1 00:15:23.986 Current LBA Format: LBA Format #00 00:15:23.986 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:23.986 00:15:23.987 08:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:23.987 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.246 [2024-05-15 08:27:11.140938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:29.512 Initializing NVMe Controllers 00:15:29.512 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:29.512 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:29.512 Initialization complete. Launching workers. 00:15:29.512 ======================================================== 00:15:29.512 Latency(us) 00:15:29.512 Device Information : IOPS MiB/s Average min max 00:15:29.512 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39965.80 156.12 3203.31 952.31 7609.78 00:15:29.512 ======================================================== 00:15:29.512 Total : 39965.80 156.12 3203.31 952.31 7609.78 00:15:29.512 00:15:29.512 [2024-05-15 08:27:16.167186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:29.512 08:27:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:29.512 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.512 [2024-05-15 08:27:16.389214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:34.779 Initializing NVMe Controllers 00:15:34.779 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:34.779 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:34.779 Initialization complete. Launching workers. 
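
The MiB/s column in the read run above matches IOPS times the 4096-byte I/O size set with -o, divided by 2^20. A hypothetical bc one-liner (not test output) reproduces the figure:

  # 39965.80 IOPS at 4 KiB per I/O
  echo "scale=4; 39965.80 * 4096 / 1048576" | bc
  # -> 156.1164, which rounds to the reported 156.12 MiB/s
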
00:15:34.779 ======================================================== 00:15:34.779 Latency(us) 00:15:34.779 Device Information : IOPS MiB/s Average min max 00:15:34.779 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.26 62.71 7978.30 6982.20 7998.07 00:15:34.779 ======================================================== 00:15:34.779 Total : 16054.26 62.71 7978.30 6982.20 7998.07 00:15:34.779 00:15:34.779 [2024-05-15 08:27:21.430908] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:34.779 08:27:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:34.779 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.779 [2024-05-15 08:27:21.629916] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:40.042 [2024-05-15 08:27:26.722550] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:40.042 Initializing NVMe Controllers 00:15:40.042 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:40.042 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:40.042 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:40.042 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:40.042 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:40.042 Initialization complete. Launching workers. 00:15:40.042 Starting thread on core 2 00:15:40.042 Starting thread on core 3 00:15:40.042 Starting thread on core 1 00:15:40.042 08:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:40.042 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.043 [2024-05-15 08:27:26.998556] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:43.334 [2024-05-15 08:27:30.056837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:43.334 Initializing NVMe Controllers 00:15:43.334 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.334 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.334 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:43.334 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:43.334 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:43.334 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:43.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:43.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:43.334 Initialization complete. Launching workers. 
00:15:43.334 Starting thread on core 1 with urgent priority queue 00:15:43.334 Starting thread on core 2 with urgent priority queue 00:15:43.334 Starting thread on core 3 with urgent priority queue 00:15:43.334 Starting thread on core 0 with urgent priority queue 00:15:43.334 SPDK bdev Controller (SPDK1 ) core 0: 8912.00 IO/s 11.22 secs/100000 ios 00:15:43.334 SPDK bdev Controller (SPDK1 ) core 1: 7990.33 IO/s 12.52 secs/100000 ios 00:15:43.334 SPDK bdev Controller (SPDK1 ) core 2: 10520.67 IO/s 9.51 secs/100000 ios 00:15:43.334 SPDK bdev Controller (SPDK1 ) core 3: 7520.33 IO/s 13.30 secs/100000 ios 00:15:43.334 ======================================================== 00:15:43.334 00:15:43.334 08:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:43.334 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.334 [2024-05-15 08:27:30.330594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:43.591 Initializing NVMe Controllers 00:15:43.591 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.591 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.591 Namespace ID: 1 size: 0GB 00:15:43.591 Initialization complete. 00:15:43.591 INFO: using host memory buffer for IO 00:15:43.591 Hello world! 00:15:43.591 [2024-05-15 08:27:30.367824] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:43.591 08:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:43.591 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.848 [2024-05-15 08:27:30.629886] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:44.782 Initializing NVMe Controllers 00:15:44.782 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:44.782 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:44.782 Initialization complete. Launching workers. 
00:15:44.782 submit (in ns) avg, min, max = 6640.7, 3229.6, 4000957.4 00:15:44.782 complete (in ns) avg, min, max = 21172.2, 1766.1, 6988374.8 00:15:44.782 00:15:44.782 Submit histogram 00:15:44.782 ================ 00:15:44.782 Range in us Cumulative Count 00:15:44.782 3.228 - 3.242: 0.0122% ( 2) 00:15:44.782 3.242 - 3.256: 0.0367% ( 4) 00:15:44.782 3.256 - 3.270: 0.0612% ( 4) 00:15:44.782 3.270 - 3.283: 0.1286% ( 11) 00:15:44.782 3.283 - 3.297: 1.0534% ( 151) 00:15:44.782 3.297 - 3.311: 4.2810% ( 527) 00:15:44.782 3.311 - 3.325: 8.7825% ( 735) 00:15:44.782 3.325 - 3.339: 14.0127% ( 854) 00:15:44.782 3.339 - 3.353: 20.0147% ( 980) 00:15:44.782 3.353 - 3.367: 26.1636% ( 1004) 00:15:44.782 3.367 - 3.381: 32.0370% ( 959) 00:15:44.782 3.381 - 3.395: 37.9348% ( 963) 00:15:44.782 3.395 - 3.409: 43.0426% ( 834) 00:15:44.782 3.409 - 3.423: 47.2317% ( 684) 00:15:44.782 3.423 - 3.437: 51.3902% ( 679) 00:15:44.782 3.437 - 3.450: 57.3555% ( 974) 00:15:44.782 3.450 - 3.464: 64.0434% ( 1092) 00:15:44.782 3.464 - 3.478: 68.3427% ( 702) 00:15:44.782 3.478 - 3.492: 73.1810% ( 790) 00:15:44.782 3.492 - 3.506: 78.2215% ( 823) 00:15:44.782 3.506 - 3.520: 81.7246% ( 572) 00:15:44.782 3.520 - 3.534: 84.1316% ( 393) 00:15:44.782 3.534 - 3.548: 85.7790% ( 269) 00:15:44.782 3.548 - 3.562: 86.6916% ( 149) 00:15:44.782 3.562 - 3.590: 87.7572% ( 174) 00:15:44.782 3.590 - 3.617: 89.0250% ( 207) 00:15:44.782 3.617 - 3.645: 90.8256% ( 294) 00:15:44.782 3.645 - 3.673: 92.5343% ( 279) 00:15:44.782 3.673 - 3.701: 94.1144% ( 258) 00:15:44.782 3.701 - 3.729: 95.9885% ( 306) 00:15:44.782 3.729 - 3.757: 97.3726% ( 226) 00:15:44.782 3.757 - 3.784: 98.2974% ( 151) 00:15:44.782 3.784 - 3.812: 98.8854% ( 96) 00:15:44.782 3.812 - 3.840: 99.1916% ( 50) 00:15:44.782 3.840 - 3.868: 99.4121% ( 36) 00:15:44.782 3.868 - 3.896: 99.5100% ( 16) 00:15:44.782 3.896 - 3.923: 99.5284% ( 3) 00:15:44.782 3.923 - 3.951: 99.5590% ( 5) 00:15:44.782 3.951 - 3.979: 99.5713% ( 2) 00:15:44.782 3.979 - 4.007: 99.5835% ( 2) 00:15:44.782 4.007 - 4.035: 99.6019% ( 3) 00:15:44.783 4.035 - 4.063: 99.6142% ( 2) 00:15:44.783 4.063 - 4.090: 99.6203% ( 1) 00:15:44.783 4.090 - 4.118: 99.6325% ( 2) 00:15:44.783 4.118 - 4.146: 99.6387% ( 1) 00:15:44.783 4.146 - 4.174: 99.6448% ( 1) 00:15:44.783 4.174 - 4.202: 99.6570% ( 2) 00:15:44.783 4.369 - 4.397: 99.6632% ( 1) 00:15:44.783 4.480 - 4.508: 99.6693% ( 1) 00:15:44.783 5.259 - 5.287: 99.6754% ( 1) 00:15:44.783 5.370 - 5.398: 99.6815% ( 1) 00:15:44.783 5.537 - 5.565: 99.6877% ( 1) 00:15:44.783 5.649 - 5.677: 99.6999% ( 2) 00:15:44.783 5.899 - 5.927: 99.7060% ( 1) 00:15:44.783 6.177 - 6.205: 99.7122% ( 1) 00:15:44.783 6.289 - 6.317: 99.7244% ( 2) 00:15:44.783 6.400 - 6.428: 99.7305% ( 1) 00:15:44.783 6.539 - 6.567: 99.7366% ( 1) 00:15:44.783 6.623 - 6.650: 99.7489% ( 2) 00:15:44.783 6.790 - 6.817: 99.7550% ( 1) 00:15:44.783 6.845 - 6.873: 99.7611% ( 1) 00:15:44.783 7.012 - 7.040: 99.7734% ( 2) 00:15:44.783 7.096 - 7.123: 99.7795% ( 1) 00:15:44.783 7.123 - 7.179: 99.7856% ( 1) 00:15:44.783 7.179 - 7.235: 99.7918% ( 1) 00:15:44.783 7.235 - 7.290: 99.7979% ( 1) 00:15:44.783 7.513 - 7.569: 99.8040% ( 1) 00:15:44.783 7.624 - 7.680: 99.8224% ( 3) 00:15:44.783 7.736 - 7.791: 99.8285% ( 1) 00:15:44.783 7.791 - 7.847: 99.8408% ( 2) 00:15:44.783 7.903 - 7.958: 99.8469% ( 1) 00:15:44.783 7.958 - 8.014: 99.8591% ( 2) 00:15:44.783 8.125 - 8.181: 99.8714% ( 2) 00:15:44.783 8.626 - 8.682: 99.8775% ( 1) 00:15:44.783 8.682 - 8.737: 99.8836% ( 1) 00:15:44.783 8.793 - 8.849: 99.8898% ( 1) 00:15:44.783 9.016 - 9.071: 99.8959% ( 1) 
00:15:44.783 9.739 - 9.795: 99.9020% ( 1) 00:15:44.783 9.850 - 9.906: 99.9081% ( 1) 00:15:44.783 [2024-05-15 08:27:31.648896] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:44.783 10.129 - 10.184: 99.9143% ( 1) 00:15:44.783 10.630 - 10.685: 99.9204% ( 1) 00:15:44.783 3989.148 - 4017.642: 100.0000% ( 13) 00:15:44.783 00:15:44.783 Complete histogram 00:15:44.783 ================== 00:15:44.783 Range in us Cumulative Count 00:15:44.783 1.760 - 1.767: 0.0061% ( 1) 00:15:44.783 1.767 - 1.774: 0.0122% ( 1) 00:15:44.783 1.774 - 1.781: 0.1409% ( 21) 00:15:44.783 1.781 - 1.795: 0.2327% ( 15) 00:15:44.783 1.795 - 1.809: 0.2511% ( 3) 00:15:44.783 1.809 - 1.823: 4.4892% ( 692) 00:15:44.783 1.823 - 1.837: 50.1470% ( 7455) 00:15:44.783 1.837 - 1.850: 75.2266% ( 4095) 00:15:44.783 1.850 - 1.864: 79.4525% ( 690) 00:15:44.783 1.864 - 1.878: 88.9576% ( 1552) 00:15:44.783 1.878 - 1.892: 93.1284% ( 681) 00:15:44.783 1.892 - 1.906: 95.5537% ( 396) 00:15:44.783 1.906 - 1.920: 97.6972% ( 350) 00:15:44.783 1.920 - 1.934: 98.3831% ( 112) 00:15:44.783 1.934 - 1.948: 98.6710% ( 47) 00:15:44.783 1.948 - 1.962: 98.8731% ( 33) 00:15:44.783 1.962 - 1.976: 98.9956% ( 20) 00:15:44.783 1.976 - 1.990: 99.0752% ( 13) 00:15:44.783 1.990 - 2.003: 99.1058% ( 5) 00:15:44.783 2.003 - 2.017: 99.1426% ( 6) 00:15:44.783 2.017 - 2.031: 99.1548% ( 2) 00:15:44.783 2.031 - 2.045: 99.1732% ( 3) 00:15:44.783 2.045 - 2.059: 99.1916% ( 3) 00:15:44.783 2.059 - 2.073: 99.2099% ( 3) 00:15:44.783 2.073 - 2.087: 99.2222% ( 2) 00:15:44.783 2.101 - 2.115: 99.2283% ( 1) 00:15:44.783 2.143 - 2.157: 99.2344% ( 1) 00:15:44.783 2.157 - 2.170: 99.2467% ( 2) 00:15:44.783 2.170 - 2.184: 99.2528% ( 1) 00:15:44.783 2.198 - 2.212: 99.2651% ( 2) 00:15:44.783 2.240 - 2.254: 99.2712% ( 1) 00:15:44.783 2.254 - 2.268: 99.2834% ( 2) 00:15:44.783 2.282 - 2.296: 99.2896% ( 1) 00:15:44.783 2.296 - 2.310: 99.2957% ( 1) 00:15:44.783 2.310 - 2.323: 99.3018% ( 1) 00:15:44.783 2.323 - 2.337: 99.3079% ( 1) 00:15:44.783 2.351 - 2.365: 99.3202% ( 2) 00:15:44.783 4.369 - 4.397: 99.3263% ( 1) 00:15:44.783 4.536 - 4.563: 99.3324% ( 1) 00:15:44.783 4.758 - 4.786: 99.3386% ( 1) 00:15:44.783 4.925 - 4.953: 99.3447% ( 1) 00:15:44.783 4.953 - 4.981: 99.3508% ( 1) 00:15:44.783 5.037 - 5.064: 99.3569% ( 1) 00:15:44.783 5.176 - 5.203: 99.3631% ( 1) 00:15:44.783 5.203 - 5.231: 99.3692% ( 1) 00:15:44.783 5.231 - 5.259: 99.3753% ( 1) 00:15:44.783 5.259 - 5.287: 99.3814% ( 1) 00:15:44.783 5.315 - 5.343: 99.3876% ( 1) 00:15:44.783 5.426 - 5.454: 99.3937% ( 1) 00:15:44.783 5.454 - 5.482: 99.3998% ( 1) 00:15:44.783 5.482 - 5.510: 99.4059% ( 1) 00:15:44.783 5.510 - 5.537: 99.4121% ( 1) 00:15:44.783 5.537 - 5.565: 99.4182% ( 1) 00:15:44.783 5.621 - 5.649: 99.4304% ( 2) 00:15:44.783 5.649 - 5.677: 99.4427% ( 2) 00:15:44.783 5.760 - 5.788: 99.4488% ( 1) 00:15:44.783 5.816 - 5.843: 99.4549% ( 1) 00:15:44.783 5.983 - 6.010: 99.4610% ( 1) 00:15:44.783 6.010 - 6.038: 99.4672% ( 1) 00:15:44.783 6.038 - 6.066: 99.4794% ( 2) 00:15:44.783 6.539 - 6.567: 99.4855% ( 1) 00:15:44.783 6.650 - 6.678: 99.4917% ( 1) 00:15:44.783 6.706 - 6.734: 99.4978% ( 1) 00:15:44.783 6.734 - 6.762: 99.5039% ( 1) 00:15:44.783 8.014 - 8.070: 99.5100% ( 1) 00:15:44.783 9.016 - 9.071: 99.5162% ( 1) 00:15:44.783 1004.410 - 1011.534: 99.5223% ( 1) 00:15:44.783 3675.715 - 3704.209: 99.5284% ( 1) 00:15:44.783 3989.148 - 4017.642: 99.9878% ( 75) 00:15:44.783 4131.617 - 4160.111: 99.9939% ( 1) 00:15:44.783 6981.009 - 7009.503: 100.0000% ( 1) 00:15:44.783 00:15:44.783 
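
The overhead summary above is reported in nanoseconds while its histograms bucket in microseconds, so dividing each max by 1000 should place it in the terminal bucket. A hypothetical bc check (not test output), assuming only that unit conversion:

  # submit max: 4000957.4 ns -> us, inside the 3989.148 - 4017.642 bucket
  echo "scale=3; 4000957.4 / 1000" | bc   # -> 4000.957
  # complete max: 6988374.8 ns -> us, inside the 6981.009 - 7009.503 bucket
  echo "scale=3; 6988374.8 / 1000" | bc   # -> 6988.374
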
08:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:44.783 08:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:44.783 08:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:44.783 08:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:44.783 08:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:45.062 [ 00:15:45.062 { 00:15:45.062 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:45.062 "subtype": "Discovery", 00:15:45.062 "listen_addresses": [], 00:15:45.062 "allow_any_host": true, 00:15:45.062 "hosts": [] 00:15:45.062 }, 00:15:45.062 { 00:15:45.062 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:45.062 "subtype": "NVMe", 00:15:45.062 "listen_addresses": [ 00:15:45.062 { 00:15:45.062 "trtype": "VFIOUSER", 00:15:45.062 "adrfam": "IPv4", 00:15:45.062 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:45.062 "trsvcid": "0" 00:15:45.062 } 00:15:45.062 ], 00:15:45.062 "allow_any_host": true, 00:15:45.062 "hosts": [], 00:15:45.062 "serial_number": "SPDK1", 00:15:45.062 "model_number": "SPDK bdev Controller", 00:15:45.062 "max_namespaces": 32, 00:15:45.062 "min_cntlid": 1, 00:15:45.062 "max_cntlid": 65519, 00:15:45.062 "namespaces": [ 00:15:45.062 { 00:15:45.062 "nsid": 1, 00:15:45.062 "bdev_name": "Malloc1", 00:15:45.062 "name": "Malloc1", 00:15:45.062 "nguid": "646B7A73C27F4CFDA8F290A5657CC277", 00:15:45.062 "uuid": "646b7a73-c27f-4cfd-a8f2-90a5657cc277" 00:15:45.062 } 00:15:45.062 ] 00:15:45.062 }, 00:15:45.062 { 00:15:45.062 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:45.062 "subtype": "NVMe", 00:15:45.062 "listen_addresses": [ 00:15:45.062 { 00:15:45.062 "trtype": "VFIOUSER", 00:15:45.062 "adrfam": "IPv4", 00:15:45.062 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:45.062 "trsvcid": "0" 00:15:45.062 } 00:15:45.062 ], 00:15:45.062 "allow_any_host": true, 00:15:45.062 "hosts": [], 00:15:45.062 "serial_number": "SPDK2", 00:15:45.062 "model_number": "SPDK bdev Controller", 00:15:45.062 "max_namespaces": 32, 00:15:45.062 "min_cntlid": 1, 00:15:45.062 "max_cntlid": 65519, 00:15:45.062 "namespaces": [ 00:15:45.062 { 00:15:45.062 "nsid": 1, 00:15:45.062 "bdev_name": "Malloc2", 00:15:45.062 "name": "Malloc2", 00:15:45.062 "nguid": "09DDC79AF5C746FD83FEDC3DE2DC0923", 00:15:45.062 "uuid": "09ddc79a-f5c7-46fd-83fe-dc3de2dc0923" 00:15:45.062 } 00:15:45.062 ] 00:15:45.062 } 00:15:45.062 ] 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=244504 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=1 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:15:45.062 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=2 00:15:45.062 08:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:15:45.062 [2024-05-15 08:27:32.020710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:45.062 08:27:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:45.062 08:27:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:45.062 08:27:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:45.062 08:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:45.062 08:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:45.320 Malloc3 00:15:45.320 08:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:45.578 [2024-05-15 08:27:32.445858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:45.578 08:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:45.578 Asynchronous Event Request test 00:15:45.578 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:45.578 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:45.578 Registering asynchronous event callbacks... 00:15:45.578 Starting namespace attribute notice tests for all controllers... 00:15:45.578 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:45.578 aer_cb - Changed Namespace 00:15:45.578 Cleaning up... 
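
The waitforfile trace above (autotest_common.sh lines 1261-1272) polls for the touch file that the aer binary creates once its event callbacks are armed. A minimal bash sketch of that pattern, assuming the 200-iteration bound and 0.1 s sleep visible in the trace rather than the exact upstream implementation:

  # block until the file named in $1 exists, giving up after ~20 s
  waitforfile() {
      local i=0
      while [ ! -e "$1" ]; do
          [ "$i" -lt 200 ] || return 1   # bail out after 200 polls
          i=$((i + 1))
          sleep 0.1
      done
      return 0
  }
  waitforfile /tmp/aer_touch_file && rm -f /tmp/aer_touch_file
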
00:15:45.837 [ 00:15:45.837 { 00:15:45.837 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:45.837 "subtype": "Discovery", 00:15:45.837 "listen_addresses": [], 00:15:45.837 "allow_any_host": true, 00:15:45.837 "hosts": [] 00:15:45.837 }, 00:15:45.837 { 00:15:45.837 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:45.837 "subtype": "NVMe", 00:15:45.837 "listen_addresses": [ 00:15:45.837 { 00:15:45.837 "trtype": "VFIOUSER", 00:15:45.837 "adrfam": "IPv4", 00:15:45.837 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:45.837 "trsvcid": "0" 00:15:45.837 } 00:15:45.837 ], 00:15:45.837 "allow_any_host": true, 00:15:45.837 "hosts": [], 00:15:45.837 "serial_number": "SPDK1", 00:15:45.837 "model_number": "SPDK bdev Controller", 00:15:45.837 "max_namespaces": 32, 00:15:45.837 "min_cntlid": 1, 00:15:45.837 "max_cntlid": 65519, 00:15:45.837 "namespaces": [ 00:15:45.837 { 00:15:45.837 "nsid": 1, 00:15:45.837 "bdev_name": "Malloc1", 00:15:45.837 "name": "Malloc1", 00:15:45.837 "nguid": "646B7A73C27F4CFDA8F290A5657CC277", 00:15:45.837 "uuid": "646b7a73-c27f-4cfd-a8f2-90a5657cc277" 00:15:45.837 }, 00:15:45.837 { 00:15:45.837 "nsid": 2, 00:15:45.837 "bdev_name": "Malloc3", 00:15:45.837 "name": "Malloc3", 00:15:45.837 "nguid": "88400125406047FC8560A3D92497340C", 00:15:45.837 "uuid": "88400125-4060-47fc-8560-a3d92497340c" 00:15:45.837 } 00:15:45.837 ] 00:15:45.837 }, 00:15:45.837 { 00:15:45.837 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:45.837 "subtype": "NVMe", 00:15:45.837 "listen_addresses": [ 00:15:45.837 { 00:15:45.837 "trtype": "VFIOUSER", 00:15:45.837 "adrfam": "IPv4", 00:15:45.837 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:45.837 "trsvcid": "0" 00:15:45.837 } 00:15:45.837 ], 00:15:45.837 "allow_any_host": true, 00:15:45.837 "hosts": [], 00:15:45.837 "serial_number": "SPDK2", 00:15:45.837 "model_number": "SPDK bdev Controller", 00:15:45.837 "max_namespaces": 32, 00:15:45.837 "min_cntlid": 1, 00:15:45.837 "max_cntlid": 65519, 00:15:45.837 "namespaces": [ 00:15:45.837 { 00:15:45.837 "nsid": 1, 00:15:45.837 "bdev_name": "Malloc2", 00:15:45.837 "name": "Malloc2", 00:15:45.837 "nguid": "09DDC79AF5C746FD83FEDC3DE2DC0923", 00:15:45.837 "uuid": "09ddc79a-f5c7-46fd-83fe-dc3de2dc0923" 00:15:45.837 } 00:15:45.837 ] 00:15:45.837 } 00:15:45.837 ] 00:15:45.837 08:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 244504 00:15:45.837 08:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:45.837 08:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:45.837 08:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:45.837 08:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:45.837 [2024-05-15 08:27:32.685724] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
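
To pull just the namespace mappings out of the nvmf_get_subsystems dump above, a hypothetical jq pipeline (assumes jq is installed; not part of the test scripts):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems | \
    jq -r '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode1") | .namespaces[] | "\(.nsid) \(.bdev_name) \(.uuid)"'
  # -> 1 Malloc1 646b7a73-c27f-4cfd-a8f2-90a5657cc277
  #    2 Malloc3 88400125-4060-47fc-8560-a3d92497340c
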
00:15:45.837 [2024-05-15 08:27:32.685756] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244732 ] 00:15:45.837 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.837 [2024-05-15 08:27:32.714563] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:45.837 [2024-05-15 08:27:32.722395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:45.837 [2024-05-15 08:27:32.722419] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdd82de7000 00:15:45.837 [2024-05-15 08:27:32.723390] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.837 [2024-05-15 08:27:32.724401] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.837 [2024-05-15 08:27:32.725413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.838 [2024-05-15 08:27:32.726415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:45.838 [2024-05-15 08:27:32.727423] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:45.838 [2024-05-15 08:27:32.728430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.838 [2024-05-15 08:27:32.729445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:45.838 [2024-05-15 08:27:32.730450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.838 [2024-05-15 08:27:32.731467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:45.838 [2024-05-15 08:27:32.731479] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdd82ddc000 00:15:45.838 [2024-05-15 08:27:32.732592] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:45.838 [2024-05-15 08:27:32.747305] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:45.838 [2024-05-15 08:27:32.747324] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:45.838 [2024-05-15 08:27:32.749373] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:45.838 [2024-05-15 08:27:32.749410] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:45.838 [2024-05-15 08:27:32.749478] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:45.838 [2024-05-15 08:27:32.749490] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:45.838 [2024-05-15 08:27:32.749495] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:45.838 [2024-05-15 08:27:32.750378] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:45.838 [2024-05-15 08:27:32.750386] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:45.838 [2024-05-15 08:27:32.750392] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:45.838 [2024-05-15 08:27:32.751386] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:45.838 [2024-05-15 08:27:32.751394] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:45.838 [2024-05-15 08:27:32.751400] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:45.838 [2024-05-15 08:27:32.752393] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:45.838 [2024-05-15 08:27:32.752403] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:45.838 [2024-05-15 08:27:32.753394] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:45.838 [2024-05-15 08:27:32.753401] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:45.838 [2024-05-15 08:27:32.753406] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:45.838 [2024-05-15 08:27:32.753411] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:45.838 [2024-05-15 08:27:32.753516] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:45.838 [2024-05-15 08:27:32.753520] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:45.838 [2024-05-15 08:27:32.753525] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:45.838 [2024-05-15 08:27:32.758169] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:45.838 [2024-05-15 08:27:32.758431] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:45.838 [2024-05-15 08:27:32.759439] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:45.838 [2024-05-15 08:27:32.760437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.838 [2024-05-15 08:27:32.760475] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:45.838 [2024-05-15 08:27:32.761450] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:45.838 [2024-05-15 08:27:32.761458] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:45.838 [2024-05-15 08:27:32.761462] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.761479] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:45.838 [2024-05-15 08:27:32.761489] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.761500] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.838 [2024-05-15 08:27:32.761504] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.838 [2024-05-15 08:27:32.761515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.838 [2024-05-15 08:27:32.769172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:45.838 [2024-05-15 08:27:32.769183] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:45.838 [2024-05-15 08:27:32.769187] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:45.838 [2024-05-15 08:27:32.769191] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:45.838 [2024-05-15 08:27:32.769198] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:45.838 [2024-05-15 08:27:32.769202] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:45.838 [2024-05-15 08:27:32.769206] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:45.838 [2024-05-15 08:27:32.769210] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.769219] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.769229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:45.838 [2024-05-15 08:27:32.777170] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:45.838 [2024-05-15 08:27:32.777181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.838 [2024-05-15 08:27:32.777188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.838 [2024-05-15 08:27:32.777196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.838 [2024-05-15 08:27:32.777203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.838 [2024-05-15 08:27:32.777207] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.777215] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.777223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:45.838 [2024-05-15 08:27:32.785171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:45.838 [2024-05-15 08:27:32.785178] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:45.838 [2024-05-15 08:27:32.785183] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.785189] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.785196] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.785203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:45.838 [2024-05-15 08:27:32.793170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:45.838 [2024-05-15 08:27:32.793216] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.793223] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.793230] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:45.838 [2024-05-15 08:27:32.793237] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:45.838 [2024-05-15 08:27:32.793243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:45.838 
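
The 0x460001 written at register offset 0x14 above is the NVMe CC (Controller Configuration) register: per the NVMe spec, EN is bit 0, SHN bits 15:14, IOSQES bits 19:16, and IOCQES bits 23:20. A hypothetical bash decode (not test output):

  cc=0x460001
  printf 'EN=%d SHN=%d IOSQES=%d IOCQES=%d\n' \
    $(( cc & 0x1 )) $(( (cc >> 14) & 0x3 )) \
    $(( (cc >> 16) & 0xf )) $(( (cc >> 20) & 0xf ))
  # -> EN=1 SHN=0 IOSQES=6 IOCQES=4
  # 2^6 = 64-byte SQ entries and 2^4 = 16-byte CQ entries, matching the
  # Submission/Completion Queue Entry Size values in the identify dump
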
[2024-05-15 08:27:32.801173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:45.838 [2024-05-15 08:27:32.801186] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:45.838 [2024-05-15 08:27:32.801194] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.801201] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:45.838 [2024-05-15 08:27:32.801207] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.838 [2024-05-15 08:27:32.801211] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.838 [2024-05-15 08:27:32.801216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.839 [2024-05-15 08:27:32.809172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:45.839 [2024-05-15 08:27:32.809185] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:45.839 [2024-05-15 08:27:32.809191] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:45.839 [2024-05-15 08:27:32.809198] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.839 [2024-05-15 08:27:32.809202] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.839 [2024-05-15 08:27:32.809207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.839 [2024-05-15 08:27:32.817171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:45.839 [2024-05-15 08:27:32.817186] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:45.839 [2024-05-15 08:27:32.817193] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:45.839 [2024-05-15 08:27:32.817202] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:45.839 [2024-05-15 08:27:32.817207] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:45.839 [2024-05-15 08:27:32.817212] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:45.839 [2024-05-15 08:27:32.817216] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:45.839 [2024-05-15 08:27:32.817220] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:45.839 [2024-05-15 08:27:32.817226] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:45.839 [2024-05-15 08:27:32.817244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:45.839 [2024-05-15 08:27:32.825172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:45.839 [2024-05-15 08:27:32.825188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:45.839 [2024-05-15 08:27:32.833171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:45.839 [2024-05-15 08:27:32.833183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:45.839 [2024-05-15 08:27:32.841171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:45.839 [2024-05-15 08:27:32.841182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:45.839 [2024-05-15 08:27:32.849170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:45.839 [2024-05-15 08:27:32.849181] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:45.839 [2024-05-15 08:27:32.849186] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:45.839 [2024-05-15 08:27:32.849189] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:45.839 [2024-05-15 08:27:32.849192] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:45.839 [2024-05-15 08:27:32.849197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:45.839 [2024-05-15 08:27:32.849203] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:45.839 [2024-05-15 08:27:32.849207] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:45.839 [2024-05-15 08:27:32.849213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:45.839 [2024-05-15 08:27:32.849218] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:45.839 [2024-05-15 08:27:32.849222] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.839 [2024-05-15 08:27:32.849228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.839 [2024-05-15 08:27:32.849236] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:45.839 [2024-05-15 08:27:32.849240] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:45.839 [2024-05-15 08:27:32.849246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:45.839 [2024-05-15 08:27:32.857175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:45.839 [2024-05-15 08:27:32.857197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:45.839 [2024-05-15 08:27:32.857205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:45.839 [2024-05-15 08:27:32.857213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:45.839 ===================================================== 00:15:45.839 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:45.839 ===================================================== 00:15:45.839 Controller Capabilities/Features 00:15:45.839 ================================ 00:15:45.839 Vendor ID: 4e58 00:15:45.839 Subsystem Vendor ID: 4e58 00:15:45.839 Serial Number: SPDK2 00:15:45.839 Model Number: SPDK bdev Controller 00:15:45.839 Firmware Version: 24.05 00:15:45.839 Recommended Arb Burst: 6 00:15:45.839 IEEE OUI Identifier: 8d 6b 50 00:15:45.839 Multi-path I/O 00:15:45.839 May have multiple subsystem ports: Yes 00:15:45.839 May have multiple controllers: Yes 00:15:45.839 Associated with SR-IOV VF: No 00:15:45.839 Max Data Transfer Size: 131072 00:15:45.839 Max Number of Namespaces: 32 00:15:45.839 Max Number of I/O Queues: 127 00:15:45.839 NVMe Specification Version (VS): 1.3 00:15:45.839 NVMe Specification Version (Identify): 1.3 00:15:45.839 Maximum Queue Entries: 256 00:15:45.839 Contiguous Queues Required: Yes 00:15:45.839 Arbitration Mechanisms Supported 00:15:45.839 Weighted Round Robin: Not Supported 00:15:45.839 Vendor Specific: Not Supported 00:15:45.839 Reset Timeout: 15000 ms 00:15:45.839 Doorbell Stride: 4 bytes 00:15:45.839 NVM Subsystem Reset: Not Supported 00:15:45.839 Command Sets Supported 00:15:45.839 NVM Command Set: Supported 00:15:45.839 Boot Partition: Not Supported 00:15:45.839 Memory Page Size Minimum: 4096 bytes 00:15:45.839 Memory Page Size Maximum: 4096 bytes 00:15:45.839 Persistent Memory Region: Not Supported 00:15:45.839 Optional Asynchronous Events Supported 00:15:45.839 Namespace Attribute Notices: Supported 00:15:45.839 Firmware Activation Notices: Not Supported 00:15:45.839 ANA Change Notices: Not Supported 00:15:45.839 PLE Aggregate Log Change Notices: Not Supported 00:15:45.839 LBA Status Info Alert Notices: Not Supported 00:15:45.839 EGE Aggregate Log Change Notices: Not Supported 00:15:45.839 Normal NVM Subsystem Shutdown event: Not Supported 00:15:45.839 Zone Descriptor Change Notices: Not Supported 00:15:45.839 Discovery Log Change Notices: Not Supported 00:15:45.839 Controller Attributes 00:15:45.839 128-bit Host Identifier: Supported 00:15:45.839 Non-Operational Permissive Mode: Not Supported 00:15:45.839 NVM Sets: Not Supported 00:15:45.839 Read Recovery Levels: Not Supported 00:15:45.839 Endurance Groups: Not Supported 00:15:45.839 Predictable Latency Mode: Not Supported 00:15:45.839 Traffic Based Keep ALive: Not Supported 00:15:45.839 Namespace Granularity: Not Supported 
00:15:45.839 SQ Associations: Not Supported 00:15:45.839 UUID List: Not Supported 00:15:45.839 Multi-Domain Subsystem: Not Supported 00:15:45.839 Fixed Capacity Management: Not Supported 00:15:45.839 Variable Capacity Management: Not Supported 00:15:45.839 Delete Endurance Group: Not Supported 00:15:45.839 Delete NVM Set: Not Supported 00:15:45.839 Extended LBA Formats Supported: Not Supported 00:15:45.839 Flexible Data Placement Supported: Not Supported 00:15:45.839 00:15:45.839 Controller Memory Buffer Support 00:15:45.839 ================================ 00:15:45.839 Supported: No 00:15:45.839 00:15:45.840 Persistent Memory Region Support 00:15:45.840 ================================ 00:15:45.840 Supported: No 00:15:45.840 00:15:45.840 Admin Command Set Attributes 00:15:45.840 ============================ 00:15:45.840 Security Send/Receive: Not Supported 00:15:45.840 Format NVM: Not Supported 00:15:45.840 Firmware Activate/Download: Not Supported 00:15:45.840 Namespace Management: Not Supported 00:15:45.840 Device Self-Test: Not Supported 00:15:45.840 Directives: Not Supported 00:15:45.840 NVMe-MI: Not Supported 00:15:45.840 Virtualization Management: Not Supported 00:15:45.840 Doorbell Buffer Config: Not Supported 00:15:45.840 Get LBA Status Capability: Not Supported 00:15:45.840 Command & Feature Lockdown Capability: Not Supported 00:15:45.840 Abort Command Limit: 4 00:15:45.840 Async Event Request Limit: 4 00:15:45.840 Number of Firmware Slots: N/A 00:15:45.840 Firmware Slot 1 Read-Only: N/A 00:15:45.840 Firmware Activation Without Reset: N/A 00:15:45.840 Multiple Update Detection Support: N/A 00:15:45.840 Firmware Update Granularity: No Information Provided 00:15:45.840 Per-Namespace SMART Log: No 00:15:45.840 Asymmetric Namespace Access Log Page: Not Supported 00:15:45.840 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:45.840 Command Effects Log Page: Supported 00:15:45.840 Get Log Page Extended Data: Supported 00:15:45.840 Telemetry Log Pages: Not Supported 00:15:45.840 Persistent Event Log Pages: Not Supported 00:15:45.840 Supported Log Pages Log Page: May Support 00:15:45.840 Commands Supported & Effects Log Page: Not Supported 00:15:45.840 Feature Identifiers & Effects Log Page:May Support 00:15:45.840 NVMe-MI Commands & Effects Log Page: May Support 00:15:45.840 Data Area 4 for Telemetry Log: Not Supported 00:15:45.840 Error Log Page Entries Supported: 128 00:15:45.840 Keep Alive: Supported 00:15:45.840 Keep Alive Granularity: 10000 ms 00:15:45.840 00:15:45.840 NVM Command Set Attributes 00:15:45.840 ========================== 00:15:45.840 Submission Queue Entry Size 00:15:45.840 Max: 64 00:15:45.840 Min: 64 00:15:45.840 Completion Queue Entry Size 00:15:45.840 Max: 16 00:15:45.840 Min: 16 00:15:45.840 Number of Namespaces: 32 00:15:45.840 Compare Command: Supported 00:15:45.840 Write Uncorrectable Command: Not Supported 00:15:45.840 Dataset Management Command: Supported 00:15:45.840 Write Zeroes Command: Supported 00:15:45.840 Set Features Save Field: Not Supported 00:15:45.840 Reservations: Not Supported 00:15:45.840 Timestamp: Not Supported 00:15:45.840 Copy: Supported 00:15:45.840 Volatile Write Cache: Present 00:15:45.840 Atomic Write Unit (Normal): 1 00:15:45.840 Atomic Write Unit (PFail): 1 00:15:45.840 Atomic Compare & Write Unit: 1 00:15:45.840 Fused Compare & Write: Supported 00:15:45.840 Scatter-Gather List 00:15:45.840 SGL Command Set: Supported (Dword aligned) 00:15:45.840 SGL Keyed: Not Supported 00:15:45.840 SGL Bit Bucket Descriptor: Not Supported 00:15:45.840 
SGL Metadata Pointer: Not Supported 00:15:45.840 Oversized SGL: Not Supported 00:15:45.840 SGL Metadata Address: Not Supported 00:15:45.840 SGL Offset: Not Supported 00:15:45.840 Transport SGL Data Block: Not Supported 00:15:45.840 Replay Protected Memory Block: Not Supported 00:15:45.840 00:15:45.840 Firmware Slot Information 00:15:45.840 ========================= 00:15:45.840 Active slot: 1 00:15:45.840 Slot 1 Firmware Revision: 24.05 00:15:45.840 00:15:45.840 00:15:45.840 Commands Supported and Effects 00:15:45.840 ============================== 00:15:45.840 Admin Commands 00:15:45.840 -------------- 00:15:45.840 Get Log Page (02h): Supported 00:15:45.840 Identify (06h): Supported 00:15:45.840 Abort (08h): Supported 00:15:45.840 Set Features (09h): Supported 00:15:45.840 Get Features (0Ah): Supported 00:15:45.840 Asynchronous Event Request (0Ch): Supported 00:15:45.840 Keep Alive (18h): Supported 00:15:45.840 I/O Commands 00:15:45.840 ------------ 00:15:45.840 Flush (00h): Supported LBA-Change 00:15:45.840 Write (01h): Supported LBA-Change 00:15:45.840 Read (02h): Supported 00:15:45.840 Compare (05h): Supported 00:15:45.840 Write Zeroes (08h): Supported LBA-Change 00:15:45.840 Dataset Management (09h): Supported LBA-Change 00:15:45.840 Copy (19h): Supported LBA-Change 00:15:45.840 Unknown (79h): Supported LBA-Change 00:15:45.840 Unknown (7Ah): Supported 00:15:45.840 00:15:45.840 Error Log 00:15:45.840 ========= 00:15:45.840 00:15:45.840 Arbitration 00:15:45.840 =========== 00:15:45.840 Arbitration Burst: 1 00:15:45.840 00:15:45.840 Power Management 00:15:45.840 ================ 00:15:45.840 Number of Power States: 1 00:15:45.840 Current Power State: Power State #0 00:15:45.840 Power State #0: 00:15:45.840 Max Power: 0.00 W 00:15:45.840 Non-Operational State: Operational 00:15:45.840 Entry Latency: Not Reported 00:15:45.840 Exit Latency: Not Reported 00:15:45.840 Relative Read Throughput: 0 00:15:45.840 Relative Read Latency: 0 00:15:45.840 Relative Write Throughput: 0 00:15:45.840 Relative Write Latency: 0 00:15:45.840 Idle Power: Not Reported 00:15:45.840 Active Power: Not Reported 00:15:45.840 Non-Operational Permissive Mode: Not Supported 00:15:45.840 00:15:45.840 Health Information 00:15:45.840 ================== 00:15:45.840 Critical Warnings: 00:15:45.840 Available Spare Space: OK 00:15:45.840 Temperature: OK 00:15:45.840 Device Reliability: OK 00:15:45.840 Read Only: No 00:15:45.840 Volatile Memory Backup: OK 00:15:45.840 [2024-05-15 08:27:32.857304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:46.099 [2024-05-15 08:27:32.865173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:46.099 [2024-05-15 08:27:32.865205] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:46.099 [2024-05-15 08:27:32.865213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.099 [2024-05-15 08:27:32.865222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.099 [2024-05-15 08:27:32.865227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.099 [2024-05-15 08:27:32.865233]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.099 [2024-05-15 08:27:32.865284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:46.099 [2024-05-15 08:27:32.865295] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:46.099 [2024-05-15 08:27:32.866297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:46.099 [2024-05-15 08:27:32.866343] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:46.099 [2024-05-15 08:27:32.866350] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:46.099 [2024-05-15 08:27:32.867297] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:46.099 [2024-05-15 08:27:32.867307] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:46.099 [2024-05-15 08:27:32.867354] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:46.099 [2024-05-15 08:27:32.868337] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:46.099 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:46.099 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:46.099 Available Spare: 0% 00:15:46.099 Available Spare Threshold: 0% 00:15:46.099 Life Percentage Used: 0% 00:15:46.099 Data Units Read: 0 00:15:46.099 Data Units Written: 0 00:15:46.099 Host Read Commands: 0 00:15:46.099 Host Write Commands: 0 00:15:46.099 Controller Busy Time: 0 minutes 00:15:46.099 Power Cycles: 0 00:15:46.099 Power On Hours: 0 hours 00:15:46.099 Unsafe Shutdowns: 0 00:15:46.099 Unrecoverable Media Errors: 0 00:15:46.099 Lifetime Error Log Entries: 0 00:15:46.099 Warning Temperature Time: 0 minutes 00:15:46.099 Critical Temperature Time: 0 minutes 00:15:46.099 00:15:46.099 Number of Queues 00:15:46.099 ================ 00:15:46.099 Number of I/O Submission Queues: 127 00:15:46.099 Number of I/O Completion Queues: 127 00:15:46.099 00:15:46.099 Active Namespaces 00:15:46.099 ================= 00:15:46.099 Namespace ID:1 00:15:46.099 Error Recovery Timeout: Unlimited 00:15:46.099 Command Set Identifier: NVM (00h) 00:15:46.099 Deallocate: Supported 00:15:46.099 Deallocated/Unwritten Error: Not Supported 00:15:46.099 Deallocated Read Value: Unknown 00:15:46.099 Deallocate in Write Zeroes: Not Supported 00:15:46.099 Deallocated Guard Field: 0xFFFF 00:15:46.099 Flush: Supported 00:15:46.099 Reservation: Supported 00:15:46.099 Namespace Sharing Capabilities: Multiple Controllers 00:15:46.099 Size (in LBAs): 131072 (0GiB) 00:15:46.099 Capacity (in LBAs): 131072 (0GiB) 00:15:46.099 Utilization (in LBAs): 131072 (0GiB) 00:15:46.099 NGUID: 09DDC79AF5C746FD83FEDC3DE2DC0923 00:15:46.099 UUID: 09ddc79a-f5c7-46fd-83fe-dc3de2dc0923 00:15:46.099 Thin Provisioning: Not Supported 00:15:46.099 Per-NS Atomic Units: Yes 00:15:46.099 Atomic Boundary Size (Normal): 0 00:15:46.099 Atomic Boundary Size (PFail): 0 00:15:46.099 Atomic Boundary Offset: 0 00:15:46.099 Maximum Single Source Range Length: 65535
00:15:46.099 Maximum Copy Length: 65535 00:15:46.099 Maximum Source Range Count: 1 00:15:46.099 NGUID/EUI64 Never Reused: No 00:15:46.099 Namespace Write Protected: No 00:15:46.099 Number of LBA Formats: 1 00:15:46.099 Current LBA Format: LBA Format #00 00:15:46.099 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:46.099 00:15:46.099 08:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:46.099 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.099 [2024-05-15 08:27:33.078542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:51.370 Initializing NVMe Controllers 00:15:51.370 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:51.370 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:51.370 Initialization complete. Launching workers. 00:15:51.370 ======================================================== 00:15:51.370 Latency(us) 00:15:51.370 Device Information : IOPS MiB/s Average min max 00:15:51.370 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39946.22 156.04 3204.12 958.38 6636.17 00:15:51.370 ======================================================== 00:15:51.370 Total : 39946.22 156.04 3204.12 958.38 6636.17 00:15:51.370 00:15:51.370 [2024-05-15 08:27:38.186426] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:51.370 08:27:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:51.370 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.628 [2024-05-15 08:27:38.406132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:56.892 Initializing NVMe Controllers 00:15:56.892 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:56.892 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:56.892 Initialization complete. Launching workers. 
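Both perf passes can be reproduced by hand against a live vfio-user target. A minimal sketch, assuming the same SPDK build tree and socket path shown in the trace (flags copied from the traced invocation; only -w differs between the read and write passes):
# Queue depth 128, 4 KiB I/Os, 5 s run, worker pinned by core mask 0x2.
./build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# Swap '-w read' for '-w write' to reproduce the second pass; per-run IOPS and latency figures vary.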
00:15:56.892 ======================================================== 00:15:56.893 Latency(us) 00:15:56.893 Device Information : IOPS MiB/s Average min max 00:15:56.893 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39909.07 155.89 3206.88 976.02 10275.93 00:15:56.893 ======================================================== 00:15:56.893 Total : 39909.07 155.89 3206.88 976.02 10275.93 00:15:56.893 00:15:56.893 [2024-05-15 08:27:43.424562] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:56.893 08:27:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:56.893 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.893 [2024-05-15 08:27:43.620042] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:02.160 [2024-05-15 08:27:48.764257] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:02.160 Initializing NVMe Controllers 00:16:02.160 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:02.160 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:02.160 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:02.160 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:02.160 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:02.160 Initialization complete. Launching workers. 00:16:02.160 Starting thread on core 2 00:16:02.160 Starting thread on core 3 00:16:02.160 Starting thread on core 1 00:16:02.160 08:27:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:02.160 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.160 [2024-05-15 08:27:49.039503] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:05.445 [2024-05-15 08:27:52.114378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:05.445 Initializing NVMe Controllers 00:16:05.445 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.445 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.445 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:05.445 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:05.445 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:05.445 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:05.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:05.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:05.445 Initialization complete. Launching workers. 
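The reconnect example run above drives the same subsystem from three worker cores at once with a mixed workload; condensed from the traced invocation (path relative to the SPDK build tree):
./build/examples/reconnect \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
# -M 50 asks for a 50/50 read/write mix; core mask 0xE spreads workers over cores 1-3.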
00:16:05.445 Starting thread on core 1 with urgent priority queue 00:16:05.445 Starting thread on core 2 with urgent priority queue 00:16:05.445 Starting thread on core 3 with urgent priority queue 00:16:05.445 Starting thread on core 0 with urgent priority queue 00:16:05.445 SPDK bdev Controller (SPDK2 ) core 0: 9354.00 IO/s 10.69 secs/100000 ios 00:16:05.445 SPDK bdev Controller (SPDK2 ) core 1: 7070.00 IO/s 14.14 secs/100000 ios 00:16:05.445 SPDK bdev Controller (SPDK2 ) core 2: 10129.33 IO/s 9.87 secs/100000 ios 00:16:05.445 SPDK bdev Controller (SPDK2 ) core 3: 7631.33 IO/s 13.10 secs/100000 ios 00:16:05.445 ======================================================== 00:16:05.445 00:16:05.445 08:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:05.445 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.445 [2024-05-15 08:27:52.382636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:05.445 Initializing NVMe Controllers 00:16:05.445 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.445 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.445 Namespace ID: 1 size: 0GB 00:16:05.445 Initialization complete. 00:16:05.445 INFO: using host memory buffer for IO 00:16:05.445 Hello world! 00:16:05.445 [2024-05-15 08:27:52.392709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:05.445 08:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:05.703 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.703 [2024-05-15 08:27:52.658145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:07.078 Initializing NVMe Controllers 00:16:07.078 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:07.078 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:07.078 Initialization complete. Launching workers. 
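As a quick sanity check on the arbitration summary above: the two columns are reciprocals of each other, secs/100000 ios = 100000 / (IO/s). For core 0, for example:
echo 'scale=2; 100000 / 9354.00' | bc    # prints 10.69, matching the reported 10.69 secs/100000 ios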
00:16:07.078 submit (in ns) avg, min, max = 7690.5, 3224.3, 4001055.7 00:16:07.078 complete (in ns) avg, min, max = 18589.0, 1817.4, 4000070.4 00:16:07.078 00:16:07.078 Submit histogram 00:16:07.078 ================ 00:16:07.078 Range in us Cumulative Count 00:16:07.078 3.214 - 3.228: 0.0062% ( 1) 00:16:07.078 3.256 - 3.270: 0.0185% ( 2) 00:16:07.078 3.270 - 3.283: 0.0246% ( 1) 00:16:07.078 3.283 - 3.297: 0.0615% ( 6) 00:16:07.078 3.297 - 3.311: 0.2338% ( 28) 00:16:07.078 3.311 - 3.325: 0.4737% ( 39) 00:16:07.078 3.325 - 3.339: 0.9228% ( 73) 00:16:07.078 3.339 - 3.353: 2.0979% ( 191) 00:16:07.078 3.353 - 3.367: 5.4141% ( 539) 00:16:07.078 3.367 - 3.381: 10.4282% ( 815) 00:16:07.078 3.381 - 3.395: 16.2114% ( 940) 00:16:07.078 3.395 - 3.409: 22.8498% ( 1079) 00:16:07.078 3.409 - 3.423: 28.8852% ( 981) 00:16:07.078 3.423 - 3.437: 34.0409% ( 838) 00:16:07.078 3.437 - 3.450: 39.2088% ( 840) 00:16:07.078 3.450 - 3.464: 44.8997% ( 925) 00:16:07.078 3.464 - 3.478: 49.1633% ( 693) 00:16:07.078 3.478 - 3.492: 52.9224% ( 611) 00:16:07.078 3.492 - 3.506: 57.5981% ( 760) 00:16:07.078 3.506 - 3.520: 64.3349% ( 1095) 00:16:07.078 3.520 - 3.534: 69.2753% ( 803) 00:16:07.078 3.534 - 3.548: 73.2066% ( 639) 00:16:07.078 3.548 - 3.562: 78.4299% ( 849) 00:16:07.078 3.562 - 3.590: 84.9637% ( 1062) 00:16:07.078 3.590 - 3.617: 86.9078% ( 316) 00:16:07.078 3.617 - 3.645: 87.7076% ( 130) 00:16:07.078 3.645 - 3.673: 89.1473% ( 234) 00:16:07.078 3.673 - 3.701: 91.0545% ( 310) 00:16:07.078 3.701 - 3.729: 92.7402% ( 274) 00:16:07.078 3.729 - 3.757: 94.3214% ( 257) 00:16:07.078 3.757 - 3.784: 95.9518% ( 265) 00:16:07.078 3.784 - 3.812: 97.4468% ( 243) 00:16:07.078 3.812 - 3.840: 98.3819% ( 152) 00:16:07.078 3.840 - 3.868: 98.9233% ( 88) 00:16:07.078 3.868 - 3.896: 99.2310% ( 50) 00:16:07.078 3.896 - 3.923: 99.4524% ( 36) 00:16:07.078 3.923 - 3.951: 99.5263% ( 12) 00:16:07.078 3.951 - 3.979: 99.5693% ( 7) 00:16:07.078 3.979 - 4.007: 99.5816% ( 2) 00:16:07.078 4.035 - 4.063: 99.5878% ( 1) 00:16:07.078 4.063 - 4.090: 99.5939% ( 1) 00:16:07.078 4.090 - 4.118: 99.6001% ( 1) 00:16:07.078 4.118 - 4.146: 99.6063% ( 1) 00:16:07.078 4.202 - 4.230: 99.6124% ( 1) 00:16:07.078 4.313 - 4.341: 99.6186% ( 1) 00:16:07.078 4.758 - 4.786: 99.6247% ( 1) 00:16:07.078 5.343 - 5.370: 99.6309% ( 1) 00:16:07.078 5.370 - 5.398: 99.6370% ( 1) 00:16:07.078 5.454 - 5.482: 99.6432% ( 1) 00:16:07.078 5.704 - 5.732: 99.6493% ( 1) 00:16:07.078 5.816 - 5.843: 99.6555% ( 1) 00:16:07.078 5.843 - 5.871: 99.6616% ( 1) 00:16:07.078 5.871 - 5.899: 99.6678% ( 1) 00:16:07.078 6.456 - 6.483: 99.6739% ( 1) 00:16:07.078 6.483 - 6.511: 99.6801% ( 1) 00:16:07.078 6.595 - 6.623: 99.6862% ( 1) 00:16:07.078 6.845 - 6.873: 99.6924% ( 1) 00:16:07.078 6.873 - 6.901: 99.6985% ( 1) 00:16:07.078 6.929 - 6.957: 99.7047% ( 1) 00:16:07.078 7.096 - 7.123: 99.7108% ( 1) 00:16:07.078 7.179 - 7.235: 99.7231% ( 2) 00:16:07.078 7.235 - 7.290: 99.7354% ( 2) 00:16:07.078 7.346 - 7.402: 99.7416% ( 1) 00:16:07.078 7.457 - 7.513: 99.7539% ( 2) 00:16:07.078 7.624 - 7.680: 99.7724% ( 3) 00:16:07.078 7.736 - 7.791: 99.7785% ( 1) 00:16:07.078 7.903 - 7.958: 99.7847% ( 1) 00:16:07.078 7.958 - 8.014: 99.7908% ( 1) 00:16:07.078 8.014 - 8.070: 99.7970% ( 1) 00:16:07.078 8.070 - 8.125: 99.8031% ( 1) 00:16:07.078 8.125 - 8.181: 99.8277% ( 4) 00:16:07.078 8.348 - 8.403: 99.8339% ( 1) 00:16:07.078 8.515 - 8.570: 99.8400% ( 1) 00:16:07.078 8.626 - 8.682: 99.8462% ( 1) 00:16:07.078 8.682 - 8.737: 99.8523% ( 1) 00:16:07.078 8.849 - 8.904: 99.8646% ( 2) 00:16:07.078 8.904 - 8.960: 99.8708% ( 1) 
00:16:07.078 9.405 - 9.461: 99.8770% ( 1) 00:16:07.078 10.073 - 10.129: 99.8893% ( 2) 00:16:07.078 [2024-05-15 08:27:53.760234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:07.078 11.687 - 11.743: 99.8954% ( 1) 00:16:07.078 3989.148 - 4017.642: 100.0000% ( 17) 00:16:07.078 00:16:07.078 Complete histogram 00:16:07.078 ================== 00:16:07.078 Range in us Cumulative Count 00:16:07.078 1.809 - 1.823: 0.0185% ( 3) 00:16:07.078 1.823 - 1.837: 0.7629% ( 121) 00:16:07.078 1.837 - 1.850: 2.3133% ( 252) 00:16:07.078 1.850 - 1.864: 3.2792% ( 157) 00:16:07.078 1.864 - 1.878: 6.0539% ( 451) 00:16:07.078 1.878 - 1.892: 48.0682% ( 6829) 00:16:07.078 1.892 - 1.906: 86.3049% ( 6215) 00:16:07.078 1.906 - 1.920: 92.1804% ( 955) 00:16:07.078 1.920 - 1.934: 96.2102% ( 655) 00:16:07.078 1.934 - 1.948: 97.4468% ( 201) 00:16:07.078 1.948 - 1.962: 97.9820% ( 87) 00:16:07.078 1.962 - 1.976: 98.6096% ( 102) 00:16:07.078 1.976 - 1.990: 99.0218% ( 67) 00:16:07.078 1.990 - 2.003: 99.1448% ( 20) 00:16:07.078 2.003 - 2.017: 99.2002% ( 9) 00:16:07.078 2.017 - 2.031: 99.2371% ( 6) 00:16:07.078 2.031 - 2.045: 99.2556% ( 3) 00:16:07.078 2.045 - 2.059: 99.2617% ( 1) 00:16:07.078 2.059 - 2.073: 99.2679% ( 1) 00:16:07.078 2.073 - 2.087: 99.2740% ( 1) 00:16:07.078 2.087 - 2.101: 99.2802% ( 1) 00:16:07.078 2.101 - 2.115: 99.2863% ( 1) 00:16:07.078 2.115 - 2.129: 99.2925% ( 1) 00:16:07.078 2.129 - 2.143: 99.2986% ( 1) 00:16:07.078 2.143 - 2.157: 99.3048% ( 1) 00:16:07.078 2.170 - 2.184: 99.3109% ( 1) 00:16:07.078 2.198 - 2.212: 99.3232% ( 2) 00:16:07.078 2.226 - 2.240: 99.3294% ( 1) 00:16:07.078 2.240 - 2.254: 99.3417% ( 2) 00:16:07.078 2.254 - 2.268: 99.3540% ( 2) 00:16:07.078 2.351 - 2.365: 99.3602% ( 1) 00:16:07.078 3.840 - 3.868: 99.3663% ( 1) 00:16:07.078 3.951 - 3.979: 99.3786% ( 2) 00:16:07.078 3.979 - 4.007: 99.3848% ( 1) 00:16:07.078 4.313 - 4.341: 99.3909% ( 1) 00:16:07.078 4.730 - 4.758: 99.3971% ( 1) 00:16:07.078 4.953 - 4.981: 99.4032% ( 1) 00:16:07.078 5.148 - 5.176: 99.4094% ( 1) 00:16:07.078 5.176 - 5.203: 99.4155% ( 1) 00:16:07.078 5.259 - 5.287: 99.4217% ( 1) 00:16:07.078 5.426 - 5.454: 99.4278% ( 1) 00:16:07.078 5.537 - 5.565: 99.4401% ( 2) 00:16:07.078 5.565 - 5.593: 99.4463% ( 1) 00:16:07.078 5.816 - 5.843: 99.4524% ( 1) 00:16:07.078 5.843 - 5.871: 99.4647% ( 2) 00:16:07.078 5.899 - 5.927: 99.4709% ( 1) 00:16:07.078 5.927 - 5.955: 99.4771% ( 1) 00:16:07.078 6.066 - 6.094: 99.4832% ( 1) 00:16:07.078 6.344 - 6.372: 99.4894% ( 1) 00:16:07.078 6.511 - 6.539: 99.4955% ( 1) 00:16:07.078 7.290 - 7.346: 99.5078% ( 2) 00:16:07.078 7.513 - 7.569: 99.5263% ( 3) 00:16:07.078 7.569 - 7.624: 99.5324% ( 1) 00:16:07.078 7.847 - 7.903: 99.5386% ( 1) 00:16:07.078 8.682 - 8.737: 99.5447% ( 1) 00:16:07.078 8.737 - 8.793: 99.5509% ( 1) 00:16:07.078 9.016 - 9.071: 99.5570% ( 1) 00:16:07.078 13.301 - 13.357: 99.5632% ( 1) 00:16:07.078 15.026 - 15.137: 99.5693% ( 1) 00:16:07.078 16.139 - 16.250: 99.5755% ( 1) 00:16:07.078 48.083 - 48.306: 99.5816% ( 1) 00:16:07.078 3575.986 - 3590.233: 99.5878% ( 1) 00:16:07.078 3875.172 - 3903.666: 99.5939% ( 1) 00:16:07.078 3989.148 - 4017.642: 100.0000% ( 66) 00:16:07.078 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:07.079 [ 00:16:07.079 { 00:16:07.079 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:07.079 "subtype": "Discovery", 00:16:07.079 "listen_addresses": [], 00:16:07.079 "allow_any_host": true, 00:16:07.079 "hosts": [] 00:16:07.079 }, 00:16:07.079 { 00:16:07.079 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:07.079 "subtype": "NVMe", 00:16:07.079 "listen_addresses": [ 00:16:07.079 { 00:16:07.079 "trtype": "VFIOUSER", 00:16:07.079 "adrfam": "IPv4", 00:16:07.079 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:07.079 "trsvcid": "0" 00:16:07.079 } 00:16:07.079 ], 00:16:07.079 "allow_any_host": true, 00:16:07.079 "hosts": [], 00:16:07.079 "serial_number": "SPDK1", 00:16:07.079 "model_number": "SPDK bdev Controller", 00:16:07.079 "max_namespaces": 32, 00:16:07.079 "min_cntlid": 1, 00:16:07.079 "max_cntlid": 65519, 00:16:07.079 "namespaces": [ 00:16:07.079 { 00:16:07.079 "nsid": 1, 00:16:07.079 "bdev_name": "Malloc1", 00:16:07.079 "name": "Malloc1", 00:16:07.079 "nguid": "646B7A73C27F4CFDA8F290A5657CC277", 00:16:07.079 "uuid": "646b7a73-c27f-4cfd-a8f2-90a5657cc277" 00:16:07.079 }, 00:16:07.079 { 00:16:07.079 "nsid": 2, 00:16:07.079 "bdev_name": "Malloc3", 00:16:07.079 "name": "Malloc3", 00:16:07.079 "nguid": "88400125406047FC8560A3D92497340C", 00:16:07.079 "uuid": "88400125-4060-47fc-8560-a3d92497340c" 00:16:07.079 } 00:16:07.079 ] 00:16:07.079 }, 00:16:07.079 { 00:16:07.079 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:07.079 "subtype": "NVMe", 00:16:07.079 "listen_addresses": [ 00:16:07.079 { 00:16:07.079 "trtype": "VFIOUSER", 00:16:07.079 "adrfam": "IPv4", 00:16:07.079 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:07.079 "trsvcid": "0" 00:16:07.079 } 00:16:07.079 ], 00:16:07.079 "allow_any_host": true, 00:16:07.079 "hosts": [], 00:16:07.079 "serial_number": "SPDK2", 00:16:07.079 "model_number": "SPDK bdev Controller", 00:16:07.079 "max_namespaces": 32, 00:16:07.079 "min_cntlid": 1, 00:16:07.079 "max_cntlid": 65519, 00:16:07.079 "namespaces": [ 00:16:07.079 { 00:16:07.079 "nsid": 1, 00:16:07.079 "bdev_name": "Malloc2", 00:16:07.079 "name": "Malloc2", 00:16:07.079 "nguid": "09DDC79AF5C746FD83FEDC3DE2DC0923", 00:16:07.079 "uuid": "09ddc79a-f5c7-46fd-83fe-dc3de2dc0923" 00:16:07.079 } 00:16:07.079 ] 00:16:07.079 } 00:16:07.079 ] 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=248190 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=1 00:16:07.079 08:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:16:07.079 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.079 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:07.079 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:16:07.079 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=2 00:16:07.079 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:16:07.337 [2024-05-15 08:27:54.115581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:07.337 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:07.337 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:07.337 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:16:07.337 08:27:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:07.337 08:27:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:07.595 Malloc4 00:16:07.595 08:27:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:07.595 [2024-05-15 08:27:54.549859] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:07.595 08:27:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:07.595 Asynchronous Event Request test 00:16:07.595 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:07.595 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:07.595 Registering asynchronous event callbacks... 00:16:07.595 Starting namespace attribute notice tests for all controllers... 00:16:07.595 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:07.595 aer_cb - Changed Namespace 00:16:07.595 Cleaning up... 
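The namespace notice reported above is triggered by hot-adding a namespace while the aer tool blocks on the touch file; the two RPCs responsible, as traced:
# Create a 64 MB malloc bdev with 512-byte blocks, then expose it as NSID 2 on cnode2;
# the target raises a namespace-attribute-changed AEN, which aer_cb logs above.
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2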
00:16:07.854 [ 00:16:07.854 { 00:16:07.854 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:07.854 "subtype": "Discovery", 00:16:07.854 "listen_addresses": [], 00:16:07.854 "allow_any_host": true, 00:16:07.854 "hosts": [] 00:16:07.854 }, 00:16:07.854 { 00:16:07.854 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:07.854 "subtype": "NVMe", 00:16:07.854 "listen_addresses": [ 00:16:07.854 { 00:16:07.854 "trtype": "VFIOUSER", 00:16:07.854 "adrfam": "IPv4", 00:16:07.854 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:07.854 "trsvcid": "0" 00:16:07.854 } 00:16:07.854 ], 00:16:07.854 "allow_any_host": true, 00:16:07.854 "hosts": [], 00:16:07.854 "serial_number": "SPDK1", 00:16:07.854 "model_number": "SPDK bdev Controller", 00:16:07.854 "max_namespaces": 32, 00:16:07.854 "min_cntlid": 1, 00:16:07.854 "max_cntlid": 65519, 00:16:07.854 "namespaces": [ 00:16:07.854 { 00:16:07.854 "nsid": 1, 00:16:07.854 "bdev_name": "Malloc1", 00:16:07.854 "name": "Malloc1", 00:16:07.854 "nguid": "646B7A73C27F4CFDA8F290A5657CC277", 00:16:07.854 "uuid": "646b7a73-c27f-4cfd-a8f2-90a5657cc277" 00:16:07.854 }, 00:16:07.854 { 00:16:07.854 "nsid": 2, 00:16:07.854 "bdev_name": "Malloc3", 00:16:07.854 "name": "Malloc3", 00:16:07.854 "nguid": "88400125406047FC8560A3D92497340C", 00:16:07.854 "uuid": "88400125-4060-47fc-8560-a3d92497340c" 00:16:07.854 } 00:16:07.854 ] 00:16:07.854 }, 00:16:07.854 { 00:16:07.854 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:07.854 "subtype": "NVMe", 00:16:07.854 "listen_addresses": [ 00:16:07.854 { 00:16:07.854 "trtype": "VFIOUSER", 00:16:07.854 "adrfam": "IPv4", 00:16:07.854 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:07.854 "trsvcid": "0" 00:16:07.854 } 00:16:07.854 ], 00:16:07.854 "allow_any_host": true, 00:16:07.854 "hosts": [], 00:16:07.854 "serial_number": "SPDK2", 00:16:07.854 "model_number": "SPDK bdev Controller", 00:16:07.854 "max_namespaces": 32, 00:16:07.854 "min_cntlid": 1, 00:16:07.854 "max_cntlid": 65519, 00:16:07.854 "namespaces": [ 00:16:07.854 { 00:16:07.854 "nsid": 1, 00:16:07.854 "bdev_name": "Malloc2", 00:16:07.854 "name": "Malloc2", 00:16:07.854 "nguid": "09DDC79AF5C746FD83FEDC3DE2DC0923", 00:16:07.854 "uuid": "09ddc79a-f5c7-46fd-83fe-dc3de2dc0923" 00:16:07.855 }, 00:16:07.855 { 00:16:07.855 "nsid": 2, 00:16:07.855 "bdev_name": "Malloc4", 00:16:07.855 "name": "Malloc4", 00:16:07.855 "nguid": "577845CDF5274328B905AF63E06565EE", 00:16:07.855 "uuid": "577845cd-f527-4328-b905-af63e06565ee" 00:16:07.855 } 00:16:07.855 ] 00:16:07.855 } 00:16:07.855 ] 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 248190 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 240333 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 240333 ']' 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 240333 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 240333 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 240333' 00:16:07.855 killing process with pid 240333 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 240333 00:16:07.855 [2024-05-15 08:27:54.809771] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:07.855 08:27:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 240333 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=248431 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 248431' 00:16:08.114 Process pid: 248431 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 248431 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 248431 ']' 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:08.114 08:27:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:08.373 [2024-05-15 08:27:55.146276] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:08.373 [2024-05-15 08:27:55.147124] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:16:08.373 [2024-05-15 08:27:55.147160] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.373 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.373 [2024-05-15 08:27:55.201013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.373 [2024-05-15 08:27:55.268949] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:08.373 [2024-05-15 08:27:55.268988] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.373 [2024-05-15 08:27:55.268995] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.373 [2024-05-15 08:27:55.269000] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.373 [2024-05-15 08:27:55.269005] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.373 [2024-05-15 08:27:55.269096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.373 [2024-05-15 08:27:55.269221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.373 [2024-05-15 08:27:55.269245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.373 [2024-05-15 08:27:55.269246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.373 [2024-05-15 08:27:55.344621] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:08.373 [2024-05-15 08:27:55.344710] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:08.373 [2024-05-15 08:27:55.344941] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:08.373 [2024-05-15 08:27:55.345269] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:08.373 [2024-05-15 08:27:55.345450] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
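This second pass repeats the vfio-user setup with the target running in interrupt mode; condensed from the commands traced just below, the per-subsystem sequence is:
# With nvmf_tgt started under --interrupt-mode (as above):
scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
# trsvcid is '0' for VFIOUSER listeners, matching the JSON dumps earlier.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0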
00:16:08.939 08:27:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:08.939 08:27:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:16:08.939 08:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:10.314 08:27:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:10.314 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:10.314 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:10.314 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.314 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:10.314 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:10.314 Malloc1 00:16:10.314 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:10.573 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:10.831 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:10.831 [2024-05-15 08:27:57.789663] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:10.831 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.831 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:10.831 08:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:11.089 Malloc2 00:16:11.089 08:27:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:11.348 08:27:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:11.348 08:27:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 248431 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 248431 ']' 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 248431 00:16:11.606 
08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 248431 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 248431' 00:16:11.606 killing process with pid 248431 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 248431 00:16:11.606 [2024-05-15 08:27:58.548815] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:11.606 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 248431 00:16:11.864 08:27:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:11.864 08:27:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:11.864 00:16:11.864 real 0m51.650s 00:16:11.864 user 3m24.536s 00:16:11.864 sys 0m3.450s 00:16:11.864 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:11.864 08:27:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:11.864 ************************************ 00:16:11.864 END TEST nvmf_vfio_user 00:16:11.864 ************************************ 00:16:11.864 08:27:58 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:11.864 08:27:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:11.864 08:27:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:11.864 08:27:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.864 ************************************ 00:16:11.864 START TEST nvmf_vfio_user_nvme_compliance 00:16:11.864 ************************************ 00:16:11.864 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:12.123 * Looking for test storage... 
00:16:12.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.123 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=249186 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 249186' 00:16:12.124 Process pid: 249186 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 249186 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 249186 ']' 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:12.124 08:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.124 [2024-05-15 08:27:59.035499] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:16:12.124 [2024-05-15 08:27:59.035545] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.124 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.124 [2024-05-15 08:27:59.093421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:12.383 [2024-05-15 08:27:59.170803] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.383 [2024-05-15 08:27:59.170836] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.383 [2024-05-15 08:27:59.170843] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.383 [2024-05-15 08:27:59.170850] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.383 [2024-05-15 08:27:59.170855] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
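The suite's target is launched directly at compliance/compliance.sh@19 (nvmf_tgt -i 0 -e 0xFFFF -m 0x7) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-poll pattern, assuming rpc.py's rpc_get_methods as the liveness probe (the helper's exact internals are not shown in this trace):

  # Launch the target: shm id 0, every tracepoint group enabled, cores 0-2 (mask 0x7).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!
  # Poll the default RPC socket until the app responds, which is what waitforlisten waits for.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done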
00:16:12.383 [2024-05-15 08:27:59.170903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.383 [2024-05-15 08:27:59.171002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.383 [2024-05-15 08:27:59.171004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.950 08:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:12.950 08:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:16:12.950 08:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.884 malloc0 00:16:13.884 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.885 [2024-05-15 08:28:00.897937] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.885 08:28:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:14.143 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.143 00:16:14.143 00:16:14.143 CUnit - A unit testing framework for C - Version 2.1-3 00:16:14.143 http://cunit.sourceforge.net/ 00:16:14.143 00:16:14.143 00:16:14.143 Suite: nvme_compliance 00:16:14.143 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 08:28:01.044863] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.143 [2024-05-15 08:28:01.046189] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:14.143 [2024-05-15 08:28:01.046204] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:14.143 [2024-05-15 08:28:01.046210] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:14.143 [2024-05-15 08:28:01.047881] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.143 passed 00:16:14.143 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 08:28:01.125440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.143 [2024-05-15 08:28:01.128466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.143 passed 00:16:14.401 Test: admin_identify_ns ...[2024-05-15 08:28:01.209491] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.401 [2024-05-15 08:28:01.269180] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:14.401 [2024-05-15 08:28:01.277176] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:14.401 [2024-05-15 08:28:01.298269] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.401 passed 00:16:14.401 Test: admin_get_features_mandatory_features ...[2024-05-15 08:28:01.374564] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.401 [2024-05-15 08:28:01.377589] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.401 passed 00:16:14.659 Test: admin_get_features_optional_features ...[2024-05-15 08:28:01.458149] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.659 [2024-05-15 08:28:01.461175] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.659 passed 00:16:14.659 Test: admin_set_features_number_of_queues ...[2024-05-15 08:28:01.540098] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.659 [2024-05-15 08:28:01.646270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.659 passed 00:16:14.917 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 08:28:01.721398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.917 [2024-05-15 08:28:01.724426] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.917 passed 
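The subsystem these compliance cases exercise was stood up through the rpc_cmd calls recorded just before the suite began. Collected in one place, and assuming rpc.py against the default socket (rpc_cmd is a thin wrapper over it), the sequence is roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER                                  # vfio-user transport instead of TCP
  mkdir -p /var/run/vfio-user                                             # directory backing the vfio-user endpoint
  $rpc bdev_malloc_create 64 512 -b malloc0                               # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32  # allow any host, serial 'spdk', 32 namespaces max
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0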
00:16:14.917 Test: admin_get_log_page_with_lpo ...[2024-05-15 08:28:01.802291] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.917 [2024-05-15 08:28:01.871173] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:14.917 [2024-05-15 08:28:01.884259] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.917 passed 00:16:15.175 Test: fabric_property_get ...[2024-05-15 08:28:01.962212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.175 [2024-05-15 08:28:01.963448] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:15.175 [2024-05-15 08:28:01.967250] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.175 passed 00:16:15.175 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 08:28:02.045775] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.175 [2024-05-15 08:28:02.047006] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:15.175 [2024-05-15 08:28:02.048799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.175 passed 00:16:15.175 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 08:28:02.125628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.433 [2024-05-15 08:28:02.209177] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:15.433 [2024-05-15 08:28:02.225170] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:15.433 [2024-05-15 08:28:02.230257] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.433 passed 00:16:15.433 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 08:28:02.308210] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.433 [2024-05-15 08:28:02.309440] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:15.433 [2024-05-15 08:28:02.312237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.433 passed 00:16:15.433 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 08:28:02.390613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.691 [2024-05-15 08:28:02.467172] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:15.691 [2024-05-15 08:28:02.491173] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:15.691 [2024-05-15 08:28:02.496246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.691 passed 00:16:15.691 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 08:28:02.569385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.691 [2024-05-15 08:28:02.570608] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:15.691 [2024-05-15 08:28:02.570629] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:15.691 [2024-05-15 08:28:02.572408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.691 passed 00:16:15.691 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
08:28:02.650336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.949 [2024-05-15 08:28:02.743176] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:15.949 [2024-05-15 08:28:02.751180] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:15.949 [2024-05-15 08:28:02.759176] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:15.949 [2024-05-15 08:28:02.767172] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:15.949 [2024-05-15 08:28:02.796258] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.949 passed 00:16:15.949 Test: admin_create_io_sq_verify_pc ...[2024-05-15 08:28:02.873449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.949 [2024-05-15 08:28:02.892179] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:15.949 [2024-05-15 08:28:02.909649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.949 passed 00:16:16.207 Test: admin_create_io_qp_max_qps ...[2024-05-15 08:28:02.987182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:17.141 [2024-05-15 08:28:04.101174] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:17.708 [2024-05-15 08:28:04.478139] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:17.708 passed 00:16:17.708 Test: admin_create_io_sq_shared_cq ...[2024-05-15 08:28:04.558605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:17.708 [2024-05-15 08:28:04.691170] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:17.708 [2024-05-15 08:28:04.725205] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:17.967 passed 00:16:17.967 00:16:17.967 Run Summary: Type Total Ran Passed Failed Inactive 00:16:17.967 suites 1 1 n/a 0 0 00:16:17.967 tests 18 18 18 0 0 00:16:17.968 asserts 360 360 360 0 n/a 00:16:17.968 00:16:17.968 Elapsed time = 1.514 seconds 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 249186 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 249186 ']' 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 249186 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 249186 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 249186' 00:16:17.968 killing process with pid 249186 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 249186 00:16:17.968 [2024-05-15 08:28:04.811486] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:17.968 08:28:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 249186 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:18.227 00:16:18.227 real 0m6.172s 00:16:18.227 user 0m17.577s 00:16:18.227 sys 0m0.451s 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:18.227 ************************************ 00:16:18.227 END TEST nvmf_vfio_user_nvme_compliance 00:16:18.227 ************************************ 00:16:18.227 08:28:05 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:18.227 08:28:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:18.227 08:28:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:18.227 08:28:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:18.227 ************************************ 00:16:18.227 START TEST nvmf_vfio_user_fuzz 00:16:18.227 ************************************ 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:18.227 * Looking for test storage... 
00:16:18.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:18.227 08:28:05 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=250173 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 250173' 00:16:18.227 Process pid: 250173 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 250173 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 250173 ']' 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.227 08:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:19.162 08:28:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:19.162 08:28:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:16:19.162 08:28:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.098 malloc0 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.098 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:20.356 08:28:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:52.430 Fuzzing completed. Shutting down the fuzz application 00:16:52.430 00:16:52.430 Dumping successful admin opcodes: 00:16:52.430 8, 9, 10, 24, 00:16:52.430 Dumping successful io opcodes: 00:16:52.430 0, 00:16:52.430 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1114130, total successful commands: 4384, random_seed: 2603684544 00:16:52.430 NS: 0x200003a1ef00 admin qp, Total commands completed: 273913, total successful commands: 2208, random_seed: 1571431872 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 250173 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 250173 ']' 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 250173 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 250173 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 250173' 00:16:52.430 killing process with pid 250173 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 250173 00:16:52.430 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 250173 00:16:52.431 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:52.431 
08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:52.431 00:16:52.431 real 0m32.771s 00:16:52.431 user 0m35.201s 00:16:52.431 sys 0m25.985s 00:16:52.431 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:52.431 08:28:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:52.431 ************************************ 00:16:52.431 END TEST nvmf_vfio_user_fuzz 00:16:52.431 ************************************ 00:16:52.431 08:28:37 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:52.431 08:28:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:52.431 08:28:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:52.431 08:28:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:52.431 ************************************ 00:16:52.431 START TEST nvmf_host_management 00:16:52.431 ************************************ 00:16:52.431 08:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:52.431 * Looking for test storage... 00:16:52.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
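Throughout nvmf/common.sh the target command line is accumulated in the NVMF_APP bash array, so helpers can append flags or prepend a wrapper without string splicing; later in this trace (nvmf/common.sh@270) the netns wrapper is prepended the same way. A reduced sketch of the pattern, assuming NVMF_APP starts as the bare nvmf_tgt path and noting that NO_HUGE is empty in this run:

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)               # shm id and tracepoint mask
  NVMF_APP+=("${NO_HUGE[@]}")                               # empty unless hugepages are disabled
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")    # run inside the target netns
  "${NVMF_APP[@]}" -m 0x1E &                                # -> ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...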
00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:52.431 08:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.623 08:28:43 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:56.623 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:56.623 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
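Both E810 functions (0x8086:0x159b, bound to ice) matched, and the lines that follow resolve each PCI address to its kernel net device by globbing sysfs. A sketch of that lookup under the two addresses reported here:

  for pci in 0000:86:00.0 0000:86:00.1; do
    # Every entry under the device's net/ directory is a netdev it exposes.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name (cvl_0_0, cvl_0_1)
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done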
00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:56.623 Found net devices under 0000:86:00.0: cvl_0_0 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:56.623 Found net devices under 0000:86:00.1: cvl_0_1 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:56.623 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:56.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:16:56.624 00:16:56.624 --- 10.0.0.2 ping statistics --- 00:16:56.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.624 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:56.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:16:56.624 00:16:56.624 --- 10.0.0.1 ping statistics --- 00:16:56.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.624 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=258692 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 258692 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 258692 ']' 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.624 08:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.624 [2024-05-15 08:28:43.355875] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:16:56.624 [2024-05-15 08:28:43.355915] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.624 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.624 [2024-05-15 08:28:43.413432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.624 [2024-05-15 08:28:43.493929] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.624 [2024-05-15 08:28:43.493965] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.624 [2024-05-15 08:28:43.493972] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.624 [2024-05-15 08:28:43.493980] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.624 [2024-05-15 08:28:43.493984] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.624 [2024-05-15 08:28:43.494083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.624 [2024-05-15 08:28:43.494189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.624 [2024-05-15 08:28:43.494295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.624 [2024-05-15 08:28:43.494296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:57.189 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:57.189 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:57.189 08:28:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.189 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.189 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.189 08:28:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.189 08:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.189 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.189 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.189 [2024-05-15 08:28:44.208004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.447 08:28:44 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.447 Malloc0 00:16:57.447 [2024-05-15 08:28:44.267538] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:57.447 [2024-05-15 08:28:44.267767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=258758 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 258758 /var/tmp/bdevperf.sock 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 258758 ']' 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:57.447 08:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:57.448 { 00:16:57.448 "params": { 00:16:57.448 "name": "Nvme$subsystem", 00:16:57.448 "trtype": "$TEST_TRANSPORT", 00:16:57.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.448 "adrfam": "ipv4", 00:16:57.448 "trsvcid": "$NVMF_PORT", 00:16:57.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.448 "hdgst": ${hdgst:-false}, 00:16:57.448 "ddgst": ${ddgst:-false} 00:16:57.448 }, 00:16:57.448 "method": "bdev_nvme_attach_controller" 00:16:57.448 } 00:16:57.448 EOF 00:16:57.448 )") 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
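gen_nvmf_target_json, traced above, expands the heredoc template once per requested subsystem (here just subsystem 0) and pipes the result through jq ., which pretty-prints and, as a side effect, validates the JSON; the fully resolved document appears in the trace just below. bdevperf never sees a file on disk: the --json /dev/fd/63 argument is evidently produced by bash process substitution. A minimal stand-in for the pattern (gen_config is a hypothetical placeholder for the real generator):

    gen_config() { printf '{"subsystems":[]}\n'; }   # stand-in for gen_nvmf_target_json
    gen_config | jq .                                # jq validates and pretty-prints
    cat <(gen_config)                                # the shell hands cat a /dev/fd/N path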
00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:57.448 08:28:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:57.448 "params": { 00:16:57.448 "name": "Nvme0", 00:16:57.448 "trtype": "tcp", 00:16:57.448 "traddr": "10.0.0.2", 00:16:57.448 "adrfam": "ipv4", 00:16:57.448 "trsvcid": "4420", 00:16:57.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:57.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:57.448 "hdgst": false, 00:16:57.448 "ddgst": false 00:16:57.448 }, 00:16:57.448 "method": "bdev_nvme_attach_controller" 00:16:57.448 }' 00:16:57.448 [2024-05-15 08:28:44.359102] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:16:57.448 [2024-05-15 08:28:44.359161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258758 ] 00:16:57.448 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.448 [2024-05-15 08:28:44.415073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.706 [2024-05-15 08:28:44.488215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.964 Running I/O for 10 seconds... 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.223 08:28:45 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899
00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']'
00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:58.223 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:58.483 [2024-05-15 08:28:45.247601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:58.483 [2024-05-15 08:28:45.247636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:58.483 [the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin commands cid:1 through cid:3, 08:28:45.247645-247680]
00:16:58.483 [2024-05-15 08:28:45.247687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1627840 is same with the state(5) to be set
00:16:58.483 [2024-05-15 08:28:45.248434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:58.483 [2024-05-15 08:28:45.248455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:58.483 [identical WRITE / ABORTED - SQ DELETION pairs repeat for the remaining queued I/Os, cid:1 through cid:62, lba stepping 128 blocks per command from 123008 to 130816, 08:28:45.248469-249454]
00:16:58.484 [2024-05-15 08:28:45.249462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:58.484 [2024-05-15 08:28:45.249469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:58.484 [2024-05-15 08:28:45.249532] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a38620 was disconnected and freed. reset controller.
00:16:58.484 [2024-05-15 08:28:45.250430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:58.484 task offset: 122880 on job bdev=Nvme0n1 fails
00:16:58.484
00:16:58.484 Latency(us)
00:16:58.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:58.484 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:58.484 Job: Nvme0n1 ended in about 0.49 seconds with error
00:16:58.484 Verification LBA range: start 0x0 length 0x400
00:16:58.484 Nvme0n1 : 0.49 1976.20 123.51 131.75 0.00 29625.21 1438.94 27354.16
00:16:58.484 ===================================================================================================================
00:16:58.484 Total : 1976.20 123.51 131.75 0.00 29625.21 1438.94 27354.16
00:16:58.484 [2024-05-15 08:28:45.252011] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:58.484 [2024-05-15 08:28:45.252025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1627840 (9): Bad file descriptor
00:16:58.484 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:58.484 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:58.484 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:58.484 08:28:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:58.484 08:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-05-15 08:28:45.262627] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
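This block is the core of the host-management test. Once the iostat poll at host_management.sh@55 confirmed that I/O was flowing (read_io_count=899, well past the 100-read threshold), @84 revoked the initiator's access with nvmf_subsystem_remove_host. The target tears down the queue pair, every outstanding command completes as ABORTED - SQ DELETION, bdevperf's verify job fails at the in-flight offset (122880), and the automatic controller reset succeeds only because @85 re-adds the host. Reduced to standalone RPC calls, the injected fault looks roughly like this (rpc.py path as used elsewhere in this run; the default RPC socket is assumed):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # revoke the initiator's access while I/O is in flight -> qpair torn down
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # restore access so the initiator's automatic controller reset can succeed
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1   # matches the harness's @87 pause before checking the outcome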
00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 258758 00:16:59.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (258758) - No such process 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.419 { 00:16:59.419 "params": { 00:16:59.419 "name": "Nvme$subsystem", 00:16:59.419 "trtype": "$TEST_TRANSPORT", 00:16:59.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.419 "adrfam": "ipv4", 00:16:59.419 "trsvcid": "$NVMF_PORT", 00:16:59.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.419 "hdgst": ${hdgst:-false}, 00:16:59.419 "ddgst": ${ddgst:-false} 00:16:59.419 }, 00:16:59.419 "method": "bdev_nvme_attach_controller" 00:16:59.419 } 00:16:59.419 EOF 00:16:59.419 )") 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:59.419 08:28:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.419 "params": { 00:16:59.419 "name": "Nvme0", 00:16:59.419 "trtype": "tcp", 00:16:59.419 "traddr": "10.0.0.2", 00:16:59.419 "adrfam": "ipv4", 00:16:59.419 "trsvcid": "4420", 00:16:59.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:59.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:59.419 "hdgst": false, 00:16:59.419 "ddgst": false 00:16:59.419 }, 00:16:59.419 "method": "bdev_nvme_attach_controller" 00:16:59.419 }' 00:16:59.419 [2024-05-15 08:28:46.312063] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:16:59.419 [2024-05-15 08:28:46.312109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259210 ] 00:16:59.419 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.419 [2024-05-15 08:28:46.368124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.419 [2024-05-15 08:28:46.438407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.984 Running I/O for 1 seconds... 
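With access restored, the harness relaunches bdevperf for a clean verification pass against the same subsystem; the run that just started above uses the same generated JSON (this time on /dev/fd/62) and must finish without errors. Its knobs, spelled out (command as logged, paths relative to the spdk checkout; gen_nvmf_target_json comes from nvmf/common.sh):

    #   -q 64      keep 64 I/Os outstanding per job
    #   -o 65536   use 64 KiB I/Os
    #   -w verify  write a pattern, read it back, and check the payload
    #   -t 1       run for one second
    ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1

The latency table that follows is this run's result: note Fail/s is 0.00, against 131.75 for the aborted run above.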
00:17:00.916
00:17:00.916 Latency(us)
00:17:00.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.916 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:00.916 Verification LBA range: start 0x0 length 0x400
00:17:00.916 Nvme0n1 : 1.01 2023.64 126.48 0.00 0.00 31121.66 4587.52 27468.13
00:17:00.916 ===================================================================================================================
00:17:00.916 Total : 2023.64 126.48 0.00 0.00 31121.66 4587.52 27468.13
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:01.174 08:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:01.174 rmmod nvme_tcp
00:17:01.174 rmmod nvme_fabrics
00:17:01.174 rmmod nvme_keyring
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 258692 ']'
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 258692
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 258692 ']'
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 258692
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 258692
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 258692'
killing process with pid 258692
00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 258692
[2024-05-15 08:28:48.097360] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:01.175 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 258692 00:17:01.433 [2024-05-15 08:28:48.303876] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:01.433 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.433 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.433 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.433 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.433 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.433 08:28:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.433 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.433 08:28:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.970 08:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.970 08:28:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:03.970 00:17:03.970 real 0m12.436s 00:17:03.970 user 0m23.402s 00:17:03.970 sys 0m4.951s 00:17:03.970 08:28:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:03.970 08:28:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.970 ************************************ 00:17:03.970 END TEST nvmf_host_management 00:17:03.970 ************************************ 00:17:03.970 08:28:50 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:03.970 08:28:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:03.970 08:28:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:03.970 08:28:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:03.970 ************************************ 00:17:03.970 START TEST nvmf_lvol 00:17:03.970 ************************************ 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:03.970 * Looking for test storage... 
00:17:03.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.970 08:28:50 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.970 08:28:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:09.247 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:09.247 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:09.247 Found net devices under 0000:86:00.0: cvl_0_0 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:09.247 Found net devices under 0000:86:00.1: cvl_0_1 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.247 
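The loop traced above is the whole of the NIC probe: for each whitelisted PCI ID, the harness lists the kernel network interfaces registered under that function in sysfs. A minimal bash sketch of the same probe, using the two E810 (0x159b) addresses reported in this run:

    # Sketch of gather_supported_nvmf_pci_devs (nvmf/common.sh); the PCI
    # addresses below are the two E810 ports found in this log.
    pci_devs=(0000:86:00.0 0000:86:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # every netdev bound to this PCI function appears under .../net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
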
08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.247 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:09.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:17:09.248 00:17:09.248 --- 10.0.0.2 ping statistics --- 00:17:09.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.248 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:17:09.248 00:17:09.248 --- 10.0.0.1 ping statistics --- 00:17:09.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.248 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=262963 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 262963 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 262963 ']' 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:09.248 08:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:09.248 [2024-05-15 08:28:55.969185] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:17:09.248 [2024-05-15 08:28:55.969227] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.248 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.248 [2024-05-15 08:28:56.025594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:09.248 [2024-05-15 08:28:56.104339] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.248 [2024-05-15 08:28:56.104374] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
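The nvmf_tcp_init sequence traced in this block builds the whole test topology from the two ports of one NIC: cvl_0_0 moves into a private network namespace and becomes the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port and a ping in each direction as a smoke test. Condensed to plain commands (interface and namespace names as in this run):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
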
00:17:09.248 [2024-05-15 08:28:56.104380] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.248 [2024-05-15 08:28:56.104387] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.248 [2024-05-15 08:28:56.104392] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.248 [2024-05-15 08:28:56.104437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.248 [2024-05-15 08:28:56.104453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.248 [2024-05-15 08:28:56.104455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.815 08:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:09.815 08:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:17:09.815 08:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.815 08:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.815 08:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:09.815 08:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.815 08:28:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:10.073 [2024-05-15 08:28:56.993171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.073 08:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:10.331 08:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:10.331 08:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:10.589 08:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:10.589 08:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:10.589 08:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:10.848 08:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3b257424-12dc-4f59-ae9b-094a93107903 00:17:10.848 08:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b257424-12dc-4f59-ae9b-094a93107903 lvol 20 00:17:11.106 08:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=44b4a5cf-32d2-4e06-b70b-4557579f4026 00:17:11.106 08:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:11.106 08:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 44b4a5cf-32d2-4e06-b70b-4557579f4026 00:17:11.364 08:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:17:11.623 [2024-05-15 08:28:58.480186] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:11.623 [2024-05-15 08:28:58.480449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.623 08:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:11.881 08:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=263452 00:17:11.881 08:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:11.881 08:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:11.881 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.816 08:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 44b4a5cf-32d2-4e06-b70b-4557579f4026 MY_SNAPSHOT 00:17:13.074 08:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c4ee97dc-1157-4187-bd23-7cd26f6cced8 00:17:13.074 08:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 44b4a5cf-32d2-4e06-b70b-4557579f4026 30 00:17:13.332 08:29:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c4ee97dc-1157-4187-bd23-7cd26f6cced8 MY_CLONE 00:17:13.591 08:29:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5350d27b-e8fd-4670-a0ba-dd1b8d383bf7 00:17:13.591 08:29:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5350d27b-e8fd-4670-a0ba-dd1b8d383bf7 00:17:14.158 08:29:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 263452 00:17:22.271 Initializing NVMe Controllers 00:17:22.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:22.272 Controller IO queue size 128, less than required. 00:17:22.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:22.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:22.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:22.272 Initialization complete. Launching workers. 
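While spdk_nvme_perf hammers the exported volume (randwrite, queue depth 128, 10 s, per the command above), the script exercises the logical-volume lifecycle under load: snapshot the origin, grow it, clone the snapshot, then inflate the clone into a full, independent volume. Stripped of the workspace paths, the RPC sequence is (UUIDs are this run's; rpc.py prints the new bdev's id on stdout, which the script captures):

    RPC=scripts/rpc.py                                   # shortened path for this sketch
    lvol=44b4a5cf-32d2-4e06-b70b-4557579f4026            # origin lvol from this log
    snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # -> c4ee97dc-...
    $RPC bdev_lvol_resize "$lvol" 30                     # origin grows 20 -> 30 (MiB)
    clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)       # -> 5350d27b-...
    $RPC bdev_lvol_inflate "$clone"                      # detach clone from its snapshot
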
00:17:22.272 ======================================================== 00:17:22.272 Latency(us) 00:17:22.272 Device Information : IOPS MiB/s Average min max 00:17:22.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11612.70 45.36 11027.55 1775.99 58867.12 00:17:22.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11722.30 45.79 10919.04 1269.99 61059.25 00:17:22.272 ======================================================== 00:17:22.272 Total : 23335.00 91.15 10973.04 1269.99 61059.25 00:17:22.272 00:17:22.272 08:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:22.529 08:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 44b4a5cf-32d2-4e06-b70b-4557579f4026 00:17:22.529 08:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b257424-12dc-4f59-ae9b-094a93107903 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.788 rmmod nvme_tcp 00:17:22.788 rmmod nvme_fabrics 00:17:22.788 rmmod nvme_keyring 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 262963 ']' 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 262963 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 262963 ']' 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 262963 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:22.788 08:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 262963 00:17:23.048 08:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:23.048 08:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:23.048 08:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 262963' 00:17:23.048 killing process with pid 262963 00:17:23.048 08:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 262963 00:17:23.048 [2024-05-15 08:29:09.815378] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:17:23.048 08:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 262963 00:17:23.048 08:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.048 08:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:23.048 08:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:23.048 08:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.048 08:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:23.048 08:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.048 08:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.048 08:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.590 08:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.590 00:17:25.590 real 0m21.660s 00:17:25.590 user 1m4.535s 00:17:25.590 sys 0m6.589s 00:17:25.590 08:29:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:25.590 08:29:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:25.590 ************************************ 00:17:25.590 END TEST nvmf_lvol 00:17:25.590 ************************************ 00:17:25.590 08:29:12 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:25.590 08:29:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:25.590 08:29:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:25.590 08:29:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.590 ************************************ 00:17:25.590 START TEST nvmf_lvs_grow 00:17:25.590 ************************************ 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:25.591 * Looking for test storage... 
00:17:25.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.591 08:29:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:30.866 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.866 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:30.867 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:30.867 Found net devices under 0000:86:00.0: cvl_0_0 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:30.867 Found net devices under 0000:86:00.1: cvl_0_1 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.867 08:29:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:17:30.867 00:17:30.867 --- 10.0.0.2 ping statistics --- 00:17:30.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.867 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:17:30.867 00:17:30.867 --- 10.0.0.1 ping statistics --- 00:17:30.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.867 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=268594 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 268594 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 268594 ']' 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.867 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:30.867 [2024-05-15 08:29:17.196950] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:17:30.867 [2024-05-15 08:29:17.196990] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.867 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.867 [2024-05-15 08:29:17.253279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.867 [2024-05-15 08:29:17.331702] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.867 [2024-05-15 08:29:17.331738] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
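nvmfappstart then brings the target up inside that namespace: launch the app with the run's core mask, poll its RPC socket until it answers, and only then issue the first RPC, which creates the TCP transport (logged at the start of the next block). Roughly, with $SPDK standing in for the jenkins workspace root and the polling loop as a stand-in for the suite's waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # waitforlisten equivalent: poll /var/tmp/spdk.sock until the app responds
    until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    "$RPC" nvmf_create_transport -t tcp -o -u 8192       # TCP transport, suite's flags
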
00:17:30.867 [2024-05-15 08:29:17.331745] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.867 [2024-05-15 08:29:17.331751] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.867 [2024-05-15 08:29:17.331756] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.867 [2024-05-15 08:29:17.331772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.126 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:31.126 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:17:31.126 08:29:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.126 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.126 08:29:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:31.126 08:29:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.126 08:29:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:31.383 [2024-05-15 08:29:18.171361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.383 08:29:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:31.383 08:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:31.383 08:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:31.383 08:29:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:31.383 ************************************ 00:17:31.384 START TEST lvs_grow_clean 00:17:31.384 ************************************ 00:17:31.384 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:31.384 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:31.384 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:31.384 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:31.384 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:31.384 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:31.384 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:31.384 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.384 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.384 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:31.642 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:31.642 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:31.642 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:31.642 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:31.642 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:31.901 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:31.901 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:31.901 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8e50c99f-434e-45d7-8b39-a30f6c21982c lvol 150 00:17:32.159 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=067bdec1-585d-4c50-bb88-5f4c3a624ba3 00:17:32.159 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:32.159 08:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:32.159 [2024-05-15 08:29:19.113483] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:32.159 [2024-05-15 08:29:19.113532] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:32.159 true 00:17:32.159 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:32.159 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:32.417 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:32.417 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:32.675 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 067bdec1-585d-4c50-bb88-5f4c3a624ba3 00:17:32.675 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:32.933 [2024-05-15 08:29:19.791344] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:32.934 [2024-05-15 
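lvs_grow_clean builds its store on a plain file rather than a NIC-backed device: a 200M file exposed through an AIO bdev with 4 KiB blocks, an lvstore with 4 MiB clusters on top (49 data clusters), and a 150 MiB thin volume inside it. Note that growing the file to 400M and rescanning the AIO bdev, as traced above, is not enough on its own: total_data_clusters stays at 49 until the explicit grow RPC later in the run. The setup, condensed ($SPDK/$RPC as in the earlier sketch; paths shortened):

    AIO="$SPDK/test/nvmf/target/aio_bdev"                # backing file used by this run
    rm -f "$AIO" && truncate -s 200M "$AIO"
    "$RPC" bdev_aio_create "$AIO" aio_bdev 4096          # AIO bdev, 4 KiB block size
    lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs) # -> 8e50c99f-...
    "$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    lvol=$("$RPC" bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB thin volume
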
08:29:19.791586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.934 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:33.192 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=269096 00:17:33.192 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.192 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 269096 /var/tmp/bdevperf.sock 00:17:33.192 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:33.192 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 269096 ']' 00:17:33.192 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.192 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:33.192 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.192 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:33.192 08:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:33.192 [2024-05-15 08:29:19.999462] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:17:33.192 [2024-05-15 08:29:19.999508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid269096 ] 00:17:33.192 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.192 [2024-05-15 08:29:20.054506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.192 [2024-05-15 08:29:20.131174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.125 08:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:34.125 08:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:34.125 08:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:34.125 Nvme0n1 00:17:34.125 08:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:34.384 [ 00:17:34.384 { 00:17:34.384 "name": "Nvme0n1", 00:17:34.384 "aliases": [ 00:17:34.384 "067bdec1-585d-4c50-bb88-5f4c3a624ba3" 00:17:34.384 ], 00:17:34.384 "product_name": "NVMe disk", 00:17:34.384 "block_size": 4096, 00:17:34.384 "num_blocks": 38912, 00:17:34.384 "uuid": "067bdec1-585d-4c50-bb88-5f4c3a624ba3", 00:17:34.384 "assigned_rate_limits": { 00:17:34.384 "rw_ios_per_sec": 0, 00:17:34.384 "rw_mbytes_per_sec": 0, 00:17:34.384 "r_mbytes_per_sec": 0, 00:17:34.384 "w_mbytes_per_sec": 0 00:17:34.384 }, 00:17:34.384 "claimed": false, 00:17:34.384 "zoned": false, 00:17:34.384 "supported_io_types": { 00:17:34.384 "read": true, 00:17:34.384 "write": true, 00:17:34.384 "unmap": true, 00:17:34.384 "write_zeroes": true, 00:17:34.384 "flush": true, 00:17:34.384 "reset": true, 00:17:34.384 "compare": true, 00:17:34.384 "compare_and_write": true, 00:17:34.384 "abort": true, 00:17:34.384 "nvme_admin": true, 00:17:34.384 "nvme_io": true 00:17:34.384 }, 00:17:34.384 "memory_domains": [ 00:17:34.384 { 00:17:34.384 "dma_device_id": "system", 00:17:34.384 "dma_device_type": 1 00:17:34.384 } 00:17:34.384 ], 00:17:34.384 "driver_specific": { 00:17:34.384 "nvme": [ 00:17:34.384 { 00:17:34.384 "trid": { 00:17:34.384 "trtype": "TCP", 00:17:34.384 "adrfam": "IPv4", 00:17:34.384 "traddr": "10.0.0.2", 00:17:34.384 "trsvcid": "4420", 00:17:34.384 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:34.384 }, 00:17:34.384 "ctrlr_data": { 00:17:34.384 "cntlid": 1, 00:17:34.384 "vendor_id": "0x8086", 00:17:34.384 "model_number": "SPDK bdev Controller", 00:17:34.384 "serial_number": "SPDK0", 00:17:34.384 "firmware_revision": "24.05", 00:17:34.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:34.384 "oacs": { 00:17:34.384 "security": 0, 00:17:34.384 "format": 0, 00:17:34.384 "firmware": 0, 00:17:34.384 "ns_manage": 0 00:17:34.384 }, 00:17:34.384 "multi_ctrlr": true, 00:17:34.384 "ana_reporting": false 00:17:34.384 }, 00:17:34.384 "vs": { 00:17:34.384 "nvme_version": "1.3" 00:17:34.384 }, 00:17:34.384 "ns_data": { 00:17:34.384 "id": 1, 00:17:34.384 "can_share": true 00:17:34.384 } 00:17:34.384 } 00:17:34.384 ], 00:17:34.384 "mp_policy": "active_passive" 00:17:34.384 } 00:17:34.384 } 00:17:34.384 ] 00:17:34.384 08:29:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=269330 00:17:34.384 08:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:34.384 08:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:34.384 Running I/O for 10 seconds... 00:17:35.759 Latency(us) 00:17:35.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.759 Nvme0n1 : 1.00 23006.00 89.87 0.00 0.00 0.00 0.00 0.00 00:17:35.759 =================================================================================================================== 00:17:35.759 Total : 23006.00 89.87 0.00 0.00 0.00 0.00 0.00 00:17:35.759 00:17:36.326 08:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:36.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.326 Nvme0n1 : 2.00 23219.00 90.70 0.00 0.00 0.00 0.00 0.00 00:17:36.326 =================================================================================================================== 00:17:36.326 Total : 23219.00 90.70 0.00 0.00 0.00 0.00 0.00 00:17:36.326 00:17:36.584 true 00:17:36.584 08:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:36.584 08:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:36.842 08:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:36.842 08:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:36.842 08:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 269330 00:17:37.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.409 Nvme0n1 : 3.00 23268.67 90.89 0.00 0.00 0.00 0.00 0.00 00:17:37.409 =================================================================================================================== 00:17:37.409 Total : 23268.67 90.89 0.00 0.00 0.00 0.00 0.00 00:17:37.409 00:17:38.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.344 Nvme0n1 : 4.00 23328.75 91.13 0.00 0.00 0.00 0.00 0.00 00:17:38.344 =================================================================================================================== 00:17:38.344 Total : 23328.75 91.13 0.00 0.00 0.00 0.00 0.00 00:17:38.344 00:17:39.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.721 Nvme0n1 : 5.00 23387.80 91.36 0.00 0.00 0.00 0.00 0.00 00:17:39.721 =================================================================================================================== 00:17:39.721 Total : 23387.80 91.36 0.00 0.00 0.00 0.00 0.00 00:17:39.721 00:17:40.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.657 Nvme0n1 : 6.00 23427.17 91.51 0.00 0.00 0.00 0.00 0.00 00:17:40.657 
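The actual grow happens mid-run, while bdevperf drives randwrite I/O at the attached Nvme0n1: bdev_lvol_grow_lvstore claims the clusters that became available when the backing file was enlarged and rescanned during setup (block count 51200 -> 102400 in the rescan notice earlier), and total_data_clusters jumps from 49 to 99 without disturbing the I/O. As one condensed check (variables as in the earlier sketches):

    RPC="$SPDK/scripts/rpc.py"; lvs=8e50c99f-434e-45d7-8b39-a30f6c21982c
    "$RPC" bdev_lvol_grow_lvstore -u "$lvs"              # absorb the new capacity online
    clusters=$("$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))                                 # was 49 before the grow
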
=================================================================================================================== 00:17:40.657 Total : 23427.17 91.51 0.00 0.00 0.00 0.00 0.00 00:17:40.657 00:17:41.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.592 Nvme0n1 : 7.00 23455.86 91.62 0.00 0.00 0.00 0.00 0.00 00:17:41.592 =================================================================================================================== 00:17:41.592 Total : 23455.86 91.62 0.00 0.00 0.00 0.00 0.00 00:17:41.592 00:17:42.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.527 Nvme0n1 : 8.00 23485.12 91.74 0.00 0.00 0.00 0.00 0.00 00:17:42.527 =================================================================================================================== 00:17:42.527 Total : 23485.12 91.74 0.00 0.00 0.00 0.00 0.00 00:17:42.527 00:17:43.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.462 Nvme0n1 : 9.00 23479.56 91.72 0.00 0.00 0.00 0.00 0.00 00:17:43.462 =================================================================================================================== 00:17:43.462 Total : 23479.56 91.72 0.00 0.00 0.00 0.00 0.00 00:17:43.462 00:17:44.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.397 Nvme0n1 : 10.00 23488.20 91.75 0.00 0.00 0.00 0.00 0.00 00:17:44.397 =================================================================================================================== 00:17:44.397 Total : 23488.20 91.75 0.00 0.00 0.00 0.00 0.00 00:17:44.397 00:17:44.397 00:17:44.397 Latency(us) 00:17:44.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.397 Nvme0n1 : 10.01 23489.09 91.75 0.00 0.00 5446.33 3063.10 12993.22 00:17:44.397 =================================================================================================================== 00:17:44.397 Total : 23489.09 91.75 0.00 0.00 5446.33 3063.10 12993.22 00:17:44.397 0 00:17:44.397 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 269096 00:17:44.397 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 269096 ']' 00:17:44.397 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 269096 00:17:44.397 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:44.397 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:44.397 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 269096 00:17:44.656 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:44.656 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:44.656 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 269096' 00:17:44.656 killing process with pid 269096 00:17:44.656 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 269096 00:17:44.656 Received shutdown signal, test time was about 10.000000 seconds 00:17:44.656 00:17:44.656 Latency(us) 00:17:44.656 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:17:44.656 =================================================================================================================== 00:17:44.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:44.656 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 269096 00:17:44.656 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:44.915 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:45.174 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:45.174 08:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:45.174 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:45.174 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:45.174 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:45.433 [2024-05-15 08:29:32.303549] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:45.433 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:45.692 request: 00:17:45.692 { 00:17:45.692 "uuid": "8e50c99f-434e-45d7-8b39-a30f6c21982c", 00:17:45.692 "method": "bdev_lvol_get_lvstores", 00:17:45.692 "req_id": 1 00:17:45.692 } 00:17:45.692 Got JSON-RPC error response 00:17:45.692 response: 00:17:45.692 { 00:17:45.692 "code": -19, 00:17:45.692 "message": "No such device" 00:17:45.692 } 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:45.692 aio_bdev 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 067bdec1-585d-4c50-bb88-5f4c3a624ba3 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=067bdec1-585d-4c50-bb88-5f4c3a624ba3 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:45.692 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:45.951 08:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 067bdec1-585d-4c50-bb88-5f4c3a624ba3 -t 2000 00:17:46.211 [ 00:17:46.211 { 00:17:46.211 "name": "067bdec1-585d-4c50-bb88-5f4c3a624ba3", 00:17:46.211 "aliases": [ 00:17:46.211 "lvs/lvol" 00:17:46.211 ], 00:17:46.211 "product_name": "Logical Volume", 00:17:46.211 "block_size": 4096, 00:17:46.211 "num_blocks": 38912, 00:17:46.211 "uuid": "067bdec1-585d-4c50-bb88-5f4c3a624ba3", 00:17:46.211 "assigned_rate_limits": { 00:17:46.211 "rw_ios_per_sec": 0, 00:17:46.211 "rw_mbytes_per_sec": 0, 00:17:46.211 "r_mbytes_per_sec": 0, 00:17:46.211 "w_mbytes_per_sec": 0 00:17:46.211 }, 00:17:46.211 "claimed": false, 00:17:46.211 "zoned": false, 00:17:46.211 "supported_io_types": { 00:17:46.211 "read": true, 00:17:46.211 "write": true, 00:17:46.211 "unmap": true, 00:17:46.211 "write_zeroes": true, 00:17:46.211 "flush": false, 00:17:46.211 "reset": true, 00:17:46.211 "compare": false, 00:17:46.211 "compare_and_write": false, 00:17:46.211 "abort": false, 00:17:46.211 "nvme_admin": false, 00:17:46.211 "nvme_io": false 00:17:46.211 }, 00:17:46.211 "driver_specific": { 00:17:46.211 "lvol": { 00:17:46.211 "lvol_store_uuid": "8e50c99f-434e-45d7-8b39-a30f6c21982c", 00:17:46.211 "base_bdev": "aio_bdev", 
00:17:46.211 "thin_provision": false, 00:17:46.211 "num_allocated_clusters": 38, 00:17:46.211 "snapshot": false, 00:17:46.211 "clone": false, 00:17:46.211 "esnap_clone": false 00:17:46.211 } 00:17:46.211 } 00:17:46.211 } 00:17:46.211 ] 00:17:46.211 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:46.211 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:46.211 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:46.211 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:46.211 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:46.211 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:46.470 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:46.470 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 067bdec1-585d-4c50-bb88-5f4c3a624ba3 00:17:46.729 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e50c99f-434e-45d7-8b39-a30f6c21982c 00:17:46.729 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:46.989 00:17:46.989 real 0m15.658s 00:17:46.989 user 0m15.404s 00:17:46.989 sys 0m1.343s 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.989 ************************************ 00:17:46.989 END TEST lvs_grow_clean 00:17:46.989 ************************************ 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:46.989 ************************************ 00:17:46.989 START TEST lvs_grow_dirty 00:17:46.989 ************************************ 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:46.989 08:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:47.248 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:47.248 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:47.508 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:17:47.508 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:17:47.508 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:47.767 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:47.767 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:47.767 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f lvol 150 00:17:47.767 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f23743b7-ff52-48ab-a923-312bb70af3aa 00:17:47.767 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.767 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:48.026 [2024-05-15 08:29:34.862760] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:48.026 [2024-05-15 08:29:34.862803] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:48.026 true 00:17:48.026 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:48.026 08:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:17:48.285 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:48.285 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:48.285 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f23743b7-ff52-48ab-a923-312bb70af3aa 00:17:48.544 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:48.544 [2024-05-15 08:29:35.536758] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.544 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:48.804 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=271866 00:17:48.804 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:48.804 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 271866 /var/tmp/bdevperf.sock 00:17:48.804 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:48.804 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 271866 ']' 00:17:48.804 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.804 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:48.804 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.804 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:48.804 08:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:48.804 [2024-05-15 08:29:35.753316] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
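The lvs_grow_dirty setup traced above mirrors the clean variant: a 200M file is exposed as an AIO bdev with 4096-byte blocks, a lvstore with 4M clusters is created on it (49 data clusters), a 150M lvol is carved out, and the backing file is then truncated to 400M and rescanned. The bdev grows from 51200 to 102400 blocks, but the lvstore keeps reporting 49 clusters until bdev_lvol_grow_lvstore runs mid-I/O below. A condensed sketch of that sequence, with /tmp/aio_file standing in for the workspace path used by the log:

    # Condensed sketch of the grow-lvstore setup; rpc.py is spdk/scripts/rpc.py,
    # and /tmp/aio_file is an illustrative stand-in for the workspace file.
    truncate -s 200M /tmp/aio_file
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_create -u "$lvs" lvol 150             # 150M volume
    truncate -s 400M /tmp/aio_file                         # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                        # 51200 -> 102400 blocks
    rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'                 # still 49
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"                # lvstore now spans 99 clusters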
00:17:48.804 [2024-05-15 08:29:35.753363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid271866 ] 00:17:48.804 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.804 [2024-05-15 08:29:35.805107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.062 [2024-05-15 08:29:35.885534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.626 08:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:49.626 08:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:49.626 08:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:50.191 Nvme0n1 00:17:50.191 08:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:50.191 [ 00:17:50.191 { 00:17:50.191 "name": "Nvme0n1", 00:17:50.191 "aliases": [ 00:17:50.191 "f23743b7-ff52-48ab-a923-312bb70af3aa" 00:17:50.191 ], 00:17:50.191 "product_name": "NVMe disk", 00:17:50.191 "block_size": 4096, 00:17:50.191 "num_blocks": 38912, 00:17:50.191 "uuid": "f23743b7-ff52-48ab-a923-312bb70af3aa", 00:17:50.191 "assigned_rate_limits": { 00:17:50.191 "rw_ios_per_sec": 0, 00:17:50.191 "rw_mbytes_per_sec": 0, 00:17:50.191 "r_mbytes_per_sec": 0, 00:17:50.191 "w_mbytes_per_sec": 0 00:17:50.191 }, 00:17:50.191 "claimed": false, 00:17:50.191 "zoned": false, 00:17:50.191 "supported_io_types": { 00:17:50.191 "read": true, 00:17:50.191 "write": true, 00:17:50.191 "unmap": true, 00:17:50.191 "write_zeroes": true, 00:17:50.191 "flush": true, 00:17:50.191 "reset": true, 00:17:50.191 "compare": true, 00:17:50.191 "compare_and_write": true, 00:17:50.191 "abort": true, 00:17:50.191 "nvme_admin": true, 00:17:50.191 "nvme_io": true 00:17:50.191 }, 00:17:50.191 "memory_domains": [ 00:17:50.191 { 00:17:50.191 "dma_device_id": "system", 00:17:50.191 "dma_device_type": 1 00:17:50.191 } 00:17:50.191 ], 00:17:50.191 "driver_specific": { 00:17:50.191 "nvme": [ 00:17:50.191 { 00:17:50.191 "trid": { 00:17:50.191 "trtype": "TCP", 00:17:50.191 "adrfam": "IPv4", 00:17:50.191 "traddr": "10.0.0.2", 00:17:50.191 "trsvcid": "4420", 00:17:50.191 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:50.191 }, 00:17:50.191 "ctrlr_data": { 00:17:50.191 "cntlid": 1, 00:17:50.191 "vendor_id": "0x8086", 00:17:50.191 "model_number": "SPDK bdev Controller", 00:17:50.191 "serial_number": "SPDK0", 00:17:50.191 "firmware_revision": "24.05", 00:17:50.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.191 "oacs": { 00:17:50.191 "security": 0, 00:17:50.191 "format": 0, 00:17:50.191 "firmware": 0, 00:17:50.191 "ns_manage": 0 00:17:50.191 }, 00:17:50.191 "multi_ctrlr": true, 00:17:50.191 "ana_reporting": false 00:17:50.191 }, 00:17:50.191 "vs": { 00:17:50.191 "nvme_version": "1.3" 00:17:50.191 }, 00:17:50.191 "ns_data": { 00:17:50.191 "id": 1, 00:17:50.191 "can_share": true 00:17:50.191 } 00:17:50.191 } 00:17:50.191 ], 00:17:50.191 "mp_policy": "active_passive" 00:17:50.191 } 00:17:50.191 } 00:17:50.191 ] 00:17:50.191 08:29:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=272096 00:17:50.191 08:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:50.191 08:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:50.191 Running I/O for 10 seconds... 00:17:51.564 Latency(us) 00:17:51.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.564 Nvme0n1 : 1.00 23127.00 90.34 0.00 0.00 0.00 0.00 0.00 00:17:51.564 =================================================================================================================== 00:17:51.564 Total : 23127.00 90.34 0.00 0.00 0.00 0.00 0.00 00:17:51.564 00:17:52.130 08:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:17:52.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.388 Nvme0n1 : 2.00 23314.50 91.07 0.00 0.00 0.00 0.00 0.00 00:17:52.388 =================================================================================================================== 00:17:52.388 Total : 23314.50 91.07 0.00 0.00 0.00 0.00 0.00 00:17:52.388 00:17:52.388 true 00:17:52.388 08:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:17:52.388 08:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:52.647 08:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:52.647 08:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:52.647 08:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 272096 00:17:53.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.213 Nvme0n1 : 3.00 23294.00 90.99 0.00 0.00 0.00 0.00 0.00 00:17:53.213 =================================================================================================================== 00:17:53.213 Total : 23294.00 90.99 0.00 0.00 0.00 0.00 0.00 00:17:53.213 00:17:54.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.590 Nvme0n1 : 4.00 23307.00 91.04 0.00 0.00 0.00 0.00 0.00 00:17:54.590 =================================================================================================================== 00:17:54.590 Total : 23307.00 91.04 0.00 0.00 0.00 0.00 0.00 00:17:54.590 00:17:55.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.525 Nvme0n1 : 5.00 23383.00 91.34 0.00 0.00 0.00 0.00 0.00 00:17:55.525 =================================================================================================================== 00:17:55.525 Total : 23383.00 91.34 0.00 0.00 0.00 0.00 0.00 00:17:55.525 00:17:56.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.515 Nvme0n1 : 6.00 23359.33 91.25 0.00 0.00 0.00 0.00 0.00 00:17:56.516 
=================================================================================================================== 00:17:56.516 Total : 23359.33 91.25 0.00 0.00 0.00 0.00 0.00 00:17:56.516 00:17:57.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.469 Nvme0n1 : 7.00 23401.71 91.41 0.00 0.00 0.00 0.00 0.00 00:17:57.469 =================================================================================================================== 00:17:57.469 Total : 23401.71 91.41 0.00 0.00 0.00 0.00 0.00 00:17:57.469 00:17:58.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.484 Nvme0n1 : 8.00 23441.75 91.57 0.00 0.00 0.00 0.00 0.00 00:17:58.484 =================================================================================================================== 00:17:58.484 Total : 23441.75 91.57 0.00 0.00 0.00 0.00 0.00 00:17:58.484 00:17:59.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.483 Nvme0n1 : 9.00 23453.78 91.62 0.00 0.00 0.00 0.00 0.00 00:17:59.483 =================================================================================================================== 00:17:59.483 Total : 23453.78 91.62 0.00 0.00 0.00 0.00 0.00 00:17:59.483 00:18:00.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.448 Nvme0n1 : 10.00 23478.00 91.71 0.00 0.00 0.00 0.00 0.00 00:18:00.448 =================================================================================================================== 00:18:00.448 Total : 23478.00 91.71 0.00 0.00 0.00 0.00 0.00 00:18:00.448 00:18:00.448 00:18:00.448 Latency(us) 00:18:00.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.448 Nvme0n1 : 10.01 23478.42 91.71 0.00 0.00 5447.85 2179.78 16070.57 00:18:00.448 =================================================================================================================== 00:18:00.448 Total : 23478.42 91.71 0.00 0.00 5447.85 2179.78 16070.57 00:18:00.448 0 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 271866 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 271866 ']' 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 271866 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 271866 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 271866' 00:18:00.448 killing process with pid 271866 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 271866 00:18:00.448 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.448 00:18:00.448 Latency(us) 00:18:00.448 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:18:00.448 =================================================================================================================== 00:18:00.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.448 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 271866 00:18:00.713 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:00.713 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:00.986 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:18:00.986 08:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 268594 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 268594 00:18:01.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 268594 Killed "${NVMF_APP[@]}" "$@" 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=273824 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 273824 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 273824 ']' 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
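This is what makes the run the dirty variant: after ten seconds of bdevperf I/O against the grown lvstore (61 of 99 clusters free), the nvmf target that has been serving the suite (pid 268594) is taken down with SIGKILL instead of a clean shutdown, so the lvstore metadata on the AIO file is never cleanly unloaded. A fresh nvmf_tgt (pid 273824) is started in its place, and re-creating the AIO bdev below makes blobstore recovery replay that dirty metadata, as the "Performing recovery on blobstore" notices show. A minimal sketch of the kill-and-recover check, reusing the variables from the setup sketch above (the real harness starts the target via nvmfappstart with extra flags):

    # Minimal sketch of the dirty-shutdown/recovery step.
    kill -9 "$nvmfpid"                 # leave the lvstore dirty on disk
    wait "$nvmfpid" || true            # reap the SIGKILLed target
    nvmf_tgt -m 0x1 &                  # start a fresh target process
    nvmfpid=$!
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # triggers blobstore recovery
    rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].free_clusters'                     # 61: pre-kill state survived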
00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:01.258 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:01.258 [2024-05-15 08:29:48.107079] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:18:01.258 [2024-05-15 08:29:48.107124] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.258 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.258 [2024-05-15 08:29:48.164328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.258 [2024-05-15 08:29:48.242039] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.258 [2024-05-15 08:29:48.242070] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.258 [2024-05-15 08:29:48.242077] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.258 [2024-05-15 08:29:48.242083] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.258 [2024-05-15 08:29:48.242088] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.258 [2024-05-15 08:29:48.242108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.211 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:02.211 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:02.211 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.211 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.211 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:02.211 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.211 08:29:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:02.211 [2024-05-15 08:29:49.095720] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:02.211 [2024-05-15 08:29:49.095819] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:02.211 [2024-05-15 08:29:49.095846] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:02.211 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:02.211 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f23743b7-ff52-48ab-a923-312bb70af3aa 00:18:02.211 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=f23743b7-ff52-48ab-a923-312bb70af3aa 00:18:02.211 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:02.211 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:02.211 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:02.211 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:02.211 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:02.476 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f23743b7-ff52-48ab-a923-312bb70af3aa -t 2000 00:18:02.476 [ 00:18:02.476 { 00:18:02.476 "name": "f23743b7-ff52-48ab-a923-312bb70af3aa", 00:18:02.476 "aliases": [ 00:18:02.476 "lvs/lvol" 00:18:02.476 ], 00:18:02.476 "product_name": "Logical Volume", 00:18:02.476 "block_size": 4096, 00:18:02.476 "num_blocks": 38912, 00:18:02.476 "uuid": "f23743b7-ff52-48ab-a923-312bb70af3aa", 00:18:02.476 "assigned_rate_limits": { 00:18:02.476 "rw_ios_per_sec": 0, 00:18:02.476 "rw_mbytes_per_sec": 0, 00:18:02.476 "r_mbytes_per_sec": 0, 00:18:02.476 "w_mbytes_per_sec": 0 00:18:02.476 }, 00:18:02.476 "claimed": false, 00:18:02.476 "zoned": false, 00:18:02.476 "supported_io_types": { 00:18:02.476 "read": true, 00:18:02.476 "write": true, 00:18:02.476 "unmap": true, 00:18:02.476 "write_zeroes": true, 00:18:02.476 "flush": false, 00:18:02.476 "reset": true, 00:18:02.476 "compare": false, 00:18:02.476 "compare_and_write": false, 00:18:02.476 "abort": false, 00:18:02.476 "nvme_admin": false, 00:18:02.476 "nvme_io": false 00:18:02.476 }, 00:18:02.476 "driver_specific": { 00:18:02.476 "lvol": { 00:18:02.476 "lvol_store_uuid": "0a2b110d-4dc8-4cd3-8ed8-c8399995248f", 00:18:02.476 "base_bdev": "aio_bdev", 00:18:02.476 "thin_provision": false, 00:18:02.476 "num_allocated_clusters": 38, 00:18:02.476 "snapshot": false, 00:18:02.476 "clone": false, 00:18:02.476 "esnap_clone": false 00:18:02.476 } 00:18:02.476 } 00:18:02.476 } 00:18:02.476 ] 00:18:02.476 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:02.476 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:18:02.476 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:02.744 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:02.744 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:18:02.744 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:03.010 [2024-05-15 08:29:49.964269] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.010 08:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.010 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:03.010 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:18:03.288 request: 00:18:03.288 { 00:18:03.288 "uuid": "0a2b110d-4dc8-4cd3-8ed8-c8399995248f", 00:18:03.288 "method": "bdev_lvol_get_lvstores", 00:18:03.288 "req_id": 1 00:18:03.288 } 00:18:03.288 Got JSON-RPC error response 00:18:03.288 response: 00:18:03.288 { 00:18:03.288 "code": -19, 00:18:03.288 "message": "No such device" 00:18:03.288 } 00:18:03.288 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:03.288 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:03.288 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:03.288 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:03.288 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:03.565 aio_bdev 00:18:03.565 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f23743b7-ff52-48ab-a923-312bb70af3aa 00:18:03.565 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=f23743b7-ff52-48ab-a923-312bb70af3aa 00:18:03.565 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:03.565 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:03.565 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
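The NOT helper exercised above is the harness's expected-failure assertion. Once bdev_aio_delete pulls the base bdev out from under the lvstore, bdev_lvol_get_lvstores has to fail with the JSON-RPC "No such device" error, and NOT inverts that exit status (es=1, then (( !es == 0 ))) so the test step passes exactly when the RPC fails. A simplified sketch of the pattern; the real NOT in autotest_common.sh also validates the command argument, as the type -t/type -P trace shows:

    # Simplified expected-failure wrapper in the spirit of the NOT trace above.
    NOT() {
        local es=0
        "$@" || es=$?                    # run the command, capture its status
        (( es > 128 )) && return "$es"   # a crash or signal is never "expected"
        (( !es == 0 ))                   # succeed only if the command failed
    }

    NOT rpc.py bdev_lvol_get_lvstores -u "$lvs"   # passes: the lvstore is gone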
00:18:03.565 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:03.565 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:03.565 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f23743b7-ff52-48ab-a923-312bb70af3aa -t 2000 00:18:03.827 [ 00:18:03.827 { 00:18:03.827 "name": "f23743b7-ff52-48ab-a923-312bb70af3aa", 00:18:03.827 "aliases": [ 00:18:03.827 "lvs/lvol" 00:18:03.828 ], 00:18:03.828 "product_name": "Logical Volume", 00:18:03.828 "block_size": 4096, 00:18:03.828 "num_blocks": 38912, 00:18:03.828 "uuid": "f23743b7-ff52-48ab-a923-312bb70af3aa", 00:18:03.828 "assigned_rate_limits": { 00:18:03.828 "rw_ios_per_sec": 0, 00:18:03.828 "rw_mbytes_per_sec": 0, 00:18:03.828 "r_mbytes_per_sec": 0, 00:18:03.828 "w_mbytes_per_sec": 0 00:18:03.828 }, 00:18:03.828 "claimed": false, 00:18:03.828 "zoned": false, 00:18:03.828 "supported_io_types": { 00:18:03.828 "read": true, 00:18:03.828 "write": true, 00:18:03.828 "unmap": true, 00:18:03.828 "write_zeroes": true, 00:18:03.828 "flush": false, 00:18:03.828 "reset": true, 00:18:03.828 "compare": false, 00:18:03.828 "compare_and_write": false, 00:18:03.828 "abort": false, 00:18:03.828 "nvme_admin": false, 00:18:03.828 "nvme_io": false 00:18:03.828 }, 00:18:03.828 "driver_specific": { 00:18:03.828 "lvol": { 00:18:03.828 "lvol_store_uuid": "0a2b110d-4dc8-4cd3-8ed8-c8399995248f", 00:18:03.828 "base_bdev": "aio_bdev", 00:18:03.828 "thin_provision": false, 00:18:03.828 "num_allocated_clusters": 38, 00:18:03.828 "snapshot": false, 00:18:03.828 "clone": false, 00:18:03.828 "esnap_clone": false 00:18:03.828 } 00:18:03.828 } 00:18:03.828 } 00:18:03.828 ] 00:18:03.828 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:03.828 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:18:03.828 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:04.095 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:04.095 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:18:04.095 08:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:04.095 08:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:04.095 08:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f23743b7-ff52-48ab-a923-312bb70af3aa 00:18:04.364 08:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a2b110d-4dc8-4cd3-8ed8-c8399995248f 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:04.635 00:18:04.635 real 0m17.632s 00:18:04.635 user 0m45.300s 00:18:04.635 sys 0m3.530s 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:04.635 ************************************ 00:18:04.635 END TEST lvs_grow_dirty 00:18:04.635 ************************************ 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:04.635 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:04.635 nvmf_trace.0 00:18:04.906 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:18:04.906 08:29:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:04.906 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.906 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:04.906 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.906 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.907 rmmod nvme_tcp 00:18:04.907 rmmod nvme_fabrics 00:18:04.907 rmmod nvme_keyring 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 273824 ']' 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 273824 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 273824 ']' 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 273824 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 273824 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 273824' 00:18:04.907 killing process with pid 273824 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 273824 00:18:04.907 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 273824 00:18:05.175 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:05.176 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:05.176 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:05.176 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.176 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:05.176 08:29:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.176 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.176 08:29:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.228 08:29:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:07.228 00:18:07.228 real 0m41.846s 00:18:07.228 user 1m6.303s 00:18:07.228 sys 0m8.926s 00:18:07.228 08:29:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:07.228 08:29:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:07.228 ************************************ 00:18:07.228 END TEST nvmf_lvs_grow 00:18:07.228 ************************************ 00:18:07.228 08:29:54 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:07.228 08:29:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:07.228 08:29:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:07.228 08:29:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:07.228 ************************************ 00:18:07.228 START TEST nvmf_bdev_io_wait 00:18:07.228 ************************************ 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:07.228 * Looking for test storage... 
00:18:07.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.228 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:07.229 08:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:12.603 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:12.603 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.603 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:12.604 Found net devices under 0000:86:00.0: cvl_0_0 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:12.604 Found net devices under 0000:86:00.1: cvl_0_1 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:12.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:12.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:18:12.604 00:18:12.604 --- 10.0.0.2 ping statistics --- 00:18:12.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.604 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:12.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:18:12.604 00:18:12.604 --- 10.0.0.1 ping statistics --- 00:18:12.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.604 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=278053 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 278053 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 278053 ']' 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:12.604 08:29:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:12.604 [2024-05-15 08:29:59.414410] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:18:12.604 [2024-05-15 08:29:59.414452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.604 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.604 [2024-05-15 08:29:59.472372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:12.604 [2024-05-15 08:29:59.553551] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.604 [2024-05-15 08:29:59.553585] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.604 [2024-05-15 08:29:59.553592] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.604 [2024-05-15 08:29:59.553597] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.604 [2024-05-15 08:29:59.553602] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.604 [2024-05-15 08:29:59.553659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.604 [2024-05-15 08:29:59.553752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.604 [2024-05-15 08:29:59.553837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:12.604 [2024-05-15 08:29:59.553839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 [2024-05-15 08:30:00.336293] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.602 08:30:00 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 Malloc0 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 [2024-05-15 08:30:00.396419] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:13.602 [2024-05-15 08:30:00.396646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=278143 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=278145 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:13.602 { 00:18:13.602 "params": { 00:18:13.602 "name": "Nvme$subsystem", 00:18:13.602 "trtype": "$TEST_TRANSPORT", 00:18:13.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.602 "adrfam": "ipv4", 00:18:13.602 "trsvcid": "$NVMF_PORT", 00:18:13.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.602 "hdgst": ${hdgst:-false}, 00:18:13.602 "ddgst": ${ddgst:-false} 00:18:13.602 }, 00:18:13.602 "method": 
"bdev_nvme_attach_controller" 00:18:13.602 } 00:18:13.602 EOF 00:18:13.602 )") 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:13.602 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=278147 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:13.603 { 00:18:13.603 "params": { 00:18:13.603 "name": "Nvme$subsystem", 00:18:13.603 "trtype": "$TEST_TRANSPORT", 00:18:13.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.603 "adrfam": "ipv4", 00:18:13.603 "trsvcid": "$NVMF_PORT", 00:18:13.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.603 "hdgst": ${hdgst:-false}, 00:18:13.603 "ddgst": ${ddgst:-false} 00:18:13.603 }, 00:18:13.603 "method": "bdev_nvme_attach_controller" 00:18:13.603 } 00:18:13.603 EOF 00:18:13.603 )") 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=278150 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:13.603 { 00:18:13.603 "params": { 00:18:13.603 "name": "Nvme$subsystem", 00:18:13.603 "trtype": "$TEST_TRANSPORT", 00:18:13.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.603 "adrfam": "ipv4", 00:18:13.603 "trsvcid": "$NVMF_PORT", 00:18:13.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.603 "hdgst": ${hdgst:-false}, 00:18:13.603 "ddgst": ${ddgst:-false} 00:18:13.603 }, 00:18:13.603 "method": "bdev_nvme_attach_controller" 00:18:13.603 } 00:18:13.603 EOF 00:18:13.603 )") 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 
-- # local subsystem config 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:13.603 { 00:18:13.603 "params": { 00:18:13.603 "name": "Nvme$subsystem", 00:18:13.603 "trtype": "$TEST_TRANSPORT", 00:18:13.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.603 "adrfam": "ipv4", 00:18:13.603 "trsvcid": "$NVMF_PORT", 00:18:13.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.603 "hdgst": ${hdgst:-false}, 00:18:13.603 "ddgst": ${ddgst:-false} 00:18:13.603 }, 00:18:13.603 "method": "bdev_nvme_attach_controller" 00:18:13.603 } 00:18:13.603 EOF 00:18:13.603 )") 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 278143 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:13.603 "params": { 00:18:13.603 "name": "Nvme1", 00:18:13.603 "trtype": "tcp", 00:18:13.603 "traddr": "10.0.0.2", 00:18:13.603 "adrfam": "ipv4", 00:18:13.603 "trsvcid": "4420", 00:18:13.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.603 "hdgst": false, 00:18:13.603 "ddgst": false 00:18:13.603 }, 00:18:13.603 "method": "bdev_nvme_attach_controller" 00:18:13.603 }' 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:13.603 "params": { 00:18:13.603 "name": "Nvme1", 00:18:13.603 "trtype": "tcp", 00:18:13.603 "traddr": "10.0.0.2", 00:18:13.603 "adrfam": "ipv4", 00:18:13.603 "trsvcid": "4420", 00:18:13.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.603 "hdgst": false, 00:18:13.603 "ddgst": false 00:18:13.603 }, 00:18:13.603 "method": "bdev_nvme_attach_controller" 00:18:13.603 }' 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:13.603 "params": { 00:18:13.603 "name": "Nvme1", 00:18:13.603 "trtype": "tcp", 00:18:13.603 "traddr": "10.0.0.2", 00:18:13.603 "adrfam": "ipv4", 00:18:13.603 "trsvcid": "4420", 00:18:13.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.603 "hdgst": false, 00:18:13.603 "ddgst": false 00:18:13.603 }, 00:18:13.603 "method": "bdev_nvme_attach_controller" 00:18:13.603 }' 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:13.603 08:30:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:13.603 "params": { 00:18:13.603 "name": "Nvme1", 00:18:13.603 "trtype": "tcp", 00:18:13.603 "traddr": "10.0.0.2", 00:18:13.603 "adrfam": "ipv4", 00:18:13.603 "trsvcid": "4420", 00:18:13.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.603 "hdgst": false, 00:18:13.603 "ddgst": false 00:18:13.603 }, 00:18:13.603 "method": "bdev_nvme_attach_controller" 00:18:13.603 }' 00:18:13.603 [2024-05-15 08:30:00.447061] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:18:13.603 [2024-05-15 08:30:00.447108] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:13.603 [2024-05-15 08:30:00.447203] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:18:13.603 [2024-05-15 08:30:00.447243] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:13.603 [2024-05-15 08:30:00.448955] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:18:13.603 [2024-05-15 08:30:00.448962] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:18:13.603 [2024-05-15 08:30:00.448998] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:13.603 [2024-05-15 08:30:00.449004] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:13.603 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.603 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.895 [2024-05-15 08:30:00.638002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.895 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.895 [2024-05-15 08:30:00.713047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:13.895 [2024-05-15 08:30:00.732206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.895 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.895 [2024-05-15 08:30:00.791629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.895 [2024-05-15 08:30:00.811698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:13.895 [2024-05-15 08:30:00.852097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.895 [2024-05-15 08:30:00.869306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:14.172 [2024-05-15 08:30:00.929201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:14.172 Running I/O for 1 seconds... 00:18:14.172 Running I/O for 1 seconds... 00:18:14.172 Running I/O for 1 seconds... 00:18:14.172 Running I/O for 1 seconds... 
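For reference, the four one-second runs announced above come from these bdevperf invocations, consolidated here from the xtrace lines earlier in the trace (a sketch of the launch pattern only; in the harness each command is backgrounded with its PID recorded, and the generated attach-controller JSON reaches fd 63 through process substitution):

  # All flags verbatim from the trace; each instance gets its own core
  # mask (-m) and shared-memory id (-i), with the same queue depth (-q),
  # IO size (-o), runtime (-t) and hugepage reservation (-s).
  BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  "$BDEVPERF" -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 &   # WRITE_PID=278143
  "$BDEVPERF" -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 &    # READ_PID=278145
  "$BDEVPERF" -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 &   # FLUSH_PID=278147
  "$BDEVPERF" -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 &   # UNMAP_PID=278150

Their per-workload latency tables follow.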
00:18:15.135
00:18:15.135 Latency(us)
00:18:15.135 Device Information : runtime(s)      IOPS    MiB/s  Fail/s  TO/s   Average      min      max
00:18:15.135 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:18:15.135 Nvme1n1            :       1.01  11950.65   46.68    0.00  0.00  10675.63  5869.75 18464.06
00:18:15.135 ===================================================================================================================
00:18:15.135 Total              :             11950.65   46.68    0.00  0.00  10675.63  5869.75 18464.06
00:18:15.135
00:18:15.135 Latency(us)
00:18:15.135 Device Information : runtime(s)      IOPS    MiB/s  Fail/s  TO/s   Average      min      max
00:18:15.135 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:18:15.135 Nvme1n1            :       1.00 246285.01  962.05    0.00  0.00    517.11   211.03   651.80
00:18:15.135 ===================================================================================================================
00:18:15.135 Total              :            246285.01  962.05    0.00  0.00    517.11   211.03   651.80
00:18:15.135
00:18:15.135 Latency(us)
00:18:15.135 Device Information : runtime(s)      IOPS    MiB/s  Fail/s  TO/s   Average      min      max
00:18:15.135 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:18:15.135 Nvme1n1            :       1.01   9547.35   37.29    0.00  0.00  13352.67  7522.39 22795.13
00:18:15.136 ===================================================================================================================
00:18:15.136 Total              :              9547.35   37.29    0.00  0.00  13352.67  7522.39 22795.13
00:18:15.412
00:18:15.412 Latency(us)
00:18:15.412 Device Information : runtime(s)      IOPS    MiB/s  Fail/s  TO/s   Average      min      max
00:18:15.412 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:18:15.412 Nvme1n1            :       1.00  10561.28   41.26    0.00  0.00  12085.53  4872.46 25872.47
00:18:15.412 ===================================================================================================================
00:18:15.412 Total              :             10561.28   41.26    0.00  0.00  12085.53  4872.46 25872.47
00:18:15.412 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 278145
00:18:15.412 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 278147
00:18:15.412 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 278150
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:15.673 rmmod nvme_tcp
00:18:15.673 rmmod nvme_fabrics
00:18:15.673 rmmod nvme_keyring
00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 278053 ']' 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 278053 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 278053 ']' 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 278053 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 278053 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 278053' 00:18:15.673 killing process with pid 278053 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 278053 00:18:15.673 [2024-05-15 08:30:02.606677] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:15.673 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 278053 00:18:15.939 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:15.939 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:15.939 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:15.939 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.939 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.939 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.939 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.939 08:30:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.913 08:30:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:17.913 00:18:17.913 real 0m10.742s 00:18:17.913 user 0m19.972s 00:18:17.913 sys 0m5.484s 00:18:17.913 08:30:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:17.913 08:30:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.913 ************************************ 00:18:17.913 END TEST nvmf_bdev_io_wait 00:18:17.913 ************************************ 00:18:17.913 08:30:04 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:17.913 08:30:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:17.914 08:30:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:17.914 08:30:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
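Before each of these tests, nvmftestinit builds the same two-port topology traced above: the first e810 port is moved into a network namespace to act as the target, while the second stays in the root namespace as the initiator. Consolidated from the traced commands (order, interface names, and addresses verbatim from the log; only the grouping is editorial):

  # Target side lives in its own namespace so both ends can share one host.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                                 # initiator -> target sanity check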
00:18:18.182 ************************************ 00:18:18.182 START TEST nvmf_queue_depth 00:18:18.182 ************************************ 00:18:18.182 08:30:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:18.182 * Looking for test storage... 00:18:18.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:18.182 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.183 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:18.183 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:18.183 08:30:05 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:18.183 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.183 08:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.183 08:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.183 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:18.183 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:18.183 08:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:18.183 08:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.559 
08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:23.559 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:23.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:23.559 Found net devices under 0000:86:00.0: cvl_0_0 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:23.559 Found net devices under 0000:86:00.1: cvl_0_1 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.559 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:23.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:18:23.560 00:18:23.560 --- 10.0.0.2 ping statistics --- 00:18:23.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.560 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:23.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:18:23.560 00:18:23.560 --- 10.0.0.1 ping statistics --- 00:18:23.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.560 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=282550 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 282550 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 282550 ']' 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:23.560 08:30:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.560 [2024-05-15 08:30:10.330010] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
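The two pings above close out the per-test network bring-up. nvmf_tcp_init splits the pair of ice ports between a fresh network namespace (target side: cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk) and the root namespace (initiator side: cvl_0_1 at 10.0.0.1), opens TCP port 4420 through iptables, and checks reachability in both directions before any NVMe-oF traffic starts. Condensed from the commands traced above, using this run's interface names:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root ns
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root ns -> target
    ip netns exec $NS ping -c 1 10.0.0.1          # target ns -> initiator

Everything that follows then prefixes the target application with ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD), which is why nvmf_tgt below listens on 10.0.0.2 while the test tools connect from the root namespace.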
00:18:23.560 [2024-05-15 08:30:10.330055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.560 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.560 [2024-05-15 08:30:10.387085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.560 [2024-05-15 08:30:10.466582] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.560 [2024-05-15 08:30:10.466615] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.560 [2024-05-15 08:30:10.466622] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.560 [2024-05-15 08:30:10.466628] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.560 [2024-05-15 08:30:10.466633] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.560 [2024-05-15 08:30:10.466652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.140 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:24.140 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:24.140 08:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.140 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.140 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:24.410 [2024-05-15 08:30:11.169960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:24.410 Malloc0 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.410 08:30:11 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:24.410 [2024-05-15 08:30:11.229749] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:24.410 [2024-05-15 08:30:11.229957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=282681 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 282681 /var/tmp/bdevperf.sock 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 282681 ']' 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:24.410 08:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:24.410 [2024-05-15 08:30:11.276542] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
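With the listener up on 10.0.0.2:4420, the queue-depth scenario is fully assembled: a 64 MB, 512-byte-block malloc bdev exported through subsystem nqn.2016-06.io.spdk:cnode1, and a bdevperf instance that will hold 1024 outstanding 4096-byte verify I/Os against it for 10 seconds. The rpc_cmd calls traced above are a thin wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock; spelled out, the sequence is:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192      # -u 8192: 8 KiB IO unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf itself is started with -z (initialize, then wait), so nothing runs until the trace below attaches the controller over /var/tmp/bdevperf.sock (bdev_nvme_attach_controller) and kicks the workload off with bdevperf.py perform_tests.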
00:18:24.410 [2024-05-15 08:30:11.276580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282681 ] 00:18:24.410 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.410 [2024-05-15 08:30:11.330470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.410 [2024-05-15 08:30:11.411353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.375 08:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:25.375 08:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:25.375 08:30:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:25.375 08:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.375 08:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:25.375 NVMe0n1 00:18:25.375 08:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.375 08:30:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.375 Running I/O for 10 seconds... 00:18:35.439 00:18:35.439 Latency(us) 00:18:35.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.439 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:35.439 Verification LBA range: start 0x0 length 0x4000 00:18:35.440 NVMe0n1 : 10.05 12444.50 48.61 0.00 0.00 82008.08 7522.39 53568.56 00:18:35.440 =================================================================================================================== 00:18:35.440 Total : 12444.50 48.61 0.00 0.00 82008.08 7522.39 53568.56 00:18:35.440 0 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 282681 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 282681 ']' 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 282681 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 282681 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 282681' 00:18:35.440 killing process with pid 282681 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 282681 00:18:35.440 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.440 00:18:35.440 Latency(us) 00:18:35.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.440 =================================================================================================================== 00:18:35.440 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.440 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 282681 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.699 rmmod nvme_tcp 00:18:35.699 rmmod nvme_fabrics 00:18:35.699 rmmod nvme_keyring 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 282550 ']' 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 282550 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 282550 ']' 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 282550 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:35.699 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 282550 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 282550' 00:18:35.958 killing process with pid 282550 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 282550 00:18:35.958 [2024-05-15 08:30:22.743464] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 282550 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.958 08:30:22 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.494 08:30:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:38.494 00:18:38.494 real 0m20.094s 00:18:38.494 user 0m25.071s 00:18:38.494 sys 0m5.277s 00:18:38.494 08:30:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:38.494 08:30:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:38.494 ************************************ 00:18:38.494 END TEST nvmf_queue_depth 00:18:38.494 ************************************ 00:18:38.494 08:30:25 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:38.494 08:30:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:38.494 08:30:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:38.494 08:30:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:38.494 ************************************ 00:18:38.494 START TEST nvmf_target_multipath 00:18:38.494 ************************************ 00:18:38.494 08:30:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:38.494 * Looking for test storage... 00:18:38.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.495 08:30:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:43.762 08:30:30 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.762 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:43.763 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:43.763 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.763 08:30:30 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:43.763 Found net devices under 0000:86:00.0: cvl_0_0 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:43.763 Found net devices under 0000:86:00.1: cvl_0_1 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:43.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:18:43.763 00:18:43.763 --- 10.0.0.2 ping statistics --- 00:18:43.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.763 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:43.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:18:43.763 00:18:43.763 --- 10.0.0.1 ping statistics --- 00:18:43.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.763 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:43.763 only one NIC for nvmf test 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.763 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.763 rmmod nvme_tcp 00:18:43.763 rmmod nvme_fabrics 00:18:43.763 rmmod nvme_keyring 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.022 08:30:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.927 00:18:45.927 real 0m7.782s 00:18:45.927 user 0m1.579s 00:18:45.927 sys 0m4.200s 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:45.927 08:30:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:45.927 ************************************ 00:18:45.927 END TEST nvmf_target_multipath 00:18:45.927 ************************************ 00:18:45.927 08:30:32 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:45.927 08:30:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:45.927 08:30:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:45.927 08:30:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:46.186 ************************************ 00:18:46.186 START TEST nvmf_zcopy 00:18:46.186 ************************************ 00:18:46.186 08:30:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:46.186 * Looking for test storage... 
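Unlike queue_depth, the multipath test above never reached its workload: nvmf/common.sh found only one usable NIC pair and left NVMF_SECOND_TARGET_IP empty, so multipath.sh's guard printed 'only one NIC for nvmf test' and exited 0 after tearing the namespace back down. Reconstructed from the trace (the tested variable is presumably NVMF_SECOND_TARGET_IP; the trace only shows the already-expanded '[' -z ']'):

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then   # common.sh@240 set it to "" above
        echo 'only one NIC for nvmf test'
        nvmftestfini                           # rmmod nvme-tcp/-fabrics, drop the netns, flush cvl_0_1
        exit 0
    fi

The same nvmftestfini teardown ran at the end of queue_depth as well; the body of its _remove_spdk_ns helper does not appear here because its trace is suppressed (fd 14 redirected to /dev/null), but the net effect visible above is the namespace going away and cvl_0_1 losing its address.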
00:18:46.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
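Before the zcopy test proper starts, common.sh establishes the initiator's identity: a fresh host NQN from nvme gen-hostnqn plus the bare UUID as host ID, packaged into the NVME_HOST argument array. The derivation of NVME_HOSTID is an assumption here (the trace only shows the resulting value):

    NVME_HOSTNQN=$(nvme gen-hostnqn)         # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # assumed: strip the prefix, keep the uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # tests that use the kernel initiator pass these to "$NVME_CONNECT" (nvme connect)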
00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.186 08:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.187 08:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:51.461 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.461 
08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:51.461 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:51.461 Found net devices under 0000:86:00.0: cvl_0_0 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:51.461 Found net devices under 0000:86:00.1: cvl_0_1 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- 
00:18:51.461 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:51.462 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:51.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:51.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:18:51.721
00:18:51.721 --- 10.0.0.2 ping statistics ---
00:18:51.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:51.721 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:51.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:51.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:18:51.721
00:18:51.721 --- 10.0.0.1 ping statistics ---
00:18:51.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:51.721 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=291554
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 291554
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 291554 ']'
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:51.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:51.721 08:30:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:51.721 [2024-05-15 08:30:38.620689] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:18:51.721 [2024-05-15 08:30:38.620730] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:51.721 EAL: No free 2048 kB hugepages reported on node 1
00:18:51.980 [2024-05-15 08:30:38.676389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:51.980 [2024-05-15 08:30:38.748195] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:51.980 [2024-05-15 08:30:38.748232] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
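[Editor's note] The plumbing traced above is nvmf_tcp_init from nvmf/common.sh: of the two E810 ports discovered earlier, cvl_0_0 becomes the target interface and is moved into a private network namespace, while cvl_0_1 stays in the host as the initiator side, so NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 actually crosses the physical link between the two ports. A condensed, runnable sketch of the same steps (interface, namespace, and address names are the ones used in this run; requires root):

    #!/usr/bin/env bash
    # Condensed sketch of the nvmf_tcp_init sequence traced above.
    set -e
    TGT_IF=cvl_0_0            # target-side port: moved into the namespace
    INI_IF=cvl_0_1            # initiator-side port: stays in the host
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"                       # NVMF_INITIATOR_IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # NVMF_FIRST_TARGET_IP
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # host -> namespaced target port
    ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> host

The two pings correspond to the 0.180 ms / 0.194 ms round trips in the log; nvmf_tgt is then launched inside the namespace via ip netns exec so it listens on the target side of the wire.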
00:18:51.980 [2024-05-15 08:30:38.748239] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:51.980 [2024-05-15 08:30:38.748245] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:51.980 [2024-05-15 08:30:38.748251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:51.980 [2024-05-15 08:30:38.748269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:52.548 [2024-05-15 08:30:39.455350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:52.548 [2024-05-15 08:30:39.471330] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:18:52.548 [2024-05-15 08:30:39.471524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:52.548 malloc0
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:52.548 {
00:18:52.548 "params": {
00:18:52.548 "name": "Nvme$subsystem",
00:18:52.548 "trtype": "$TEST_TRANSPORT",
00:18:52.548 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:52.548 "adrfam": "ipv4",
00:18:52.548 "trsvcid": "$NVMF_PORT",
00:18:52.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:52.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:52.548 "hdgst": ${hdgst:-false},
00:18:52.548 "ddgst": ${ddgst:-false}
00:18:52.548 },
00:18:52.548 "method": "bdev_nvme_attach_controller"
00:18:52.548 }
00:18:52.548 EOF
00:18:52.548 )")
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:18:52.548 08:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:52.548 "params": {
00:18:52.548 "name": "Nvme1",
00:18:52.548 "trtype": "tcp",
00:18:52.548 "traddr": "10.0.0.2",
00:18:52.548 "adrfam": "ipv4",
00:18:52.548 "trsvcid": "4420",
00:18:52.548 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:52.548 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:52.548 "hdgst": false,
00:18:52.548 "ddgst": false
00:18:52.548 },
00:18:52.548 "method": "bdev_nvme_attach_controller"
00:18:52.548 }'
00:18:52.548 [2024-05-15 08:30:39.548976] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:18:52.548 [2024-05-15 08:30:39.549019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291738 ]
00:18:52.808 EAL: No free 2048 kB hugepages reported on node 1
00:18:52.808 [2024-05-15 08:30:39.603784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:52.808 [2024-05-15 08:30:39.683011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:53.066 Running I/O for 10 seconds...
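[Editor's note] For reference, the provisioning RPCs traced above, gathered in one place: target/zcopy.sh builds the zero-copy TCP target end to end before starting I/O. rpc_cmd is the test suite's wrapper around scripts/rpc.py talking to the nvmf_tgt inside the namespace; the commands and arguments below are verbatim from the trace, only the comments are added:

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport, zero-copy enabled, in-capsule data size 0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                      # -a: allow any host, -s: serial number, -m: max 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0           # 32 MiB RAM-backed bdev, 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The initiator side needs no RPCs: gen_nvmf_target_json renders the bdev_nvme_attach_controller JSON shown above and bdevperf receives it on an anonymous file descriptor, which is why the trace shows --json /dev/fd/62. In shell terms that is effectively:

    bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192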
00:19:03.038
00:19:03.038                                     Latency(us)
00:19:03.038 Device Information                                       : runtime(s)     IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:03.038 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:19:03.038 Verification LBA range: start 0x0 length 0x1000
00:19:03.038 Nvme1n1                                                  :      10.01    8754.42      68.39       0.00       0.00   14578.21    2293.76   22225.25
00:19:03.038 ===================================================================================================================
00:19:03.038 Total                                                    :               8754.42      68.39       0.00       0.00   14578.21    2293.76   22225.25
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=293412
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:19:03.298 [2024-05-15 08:30:50.167910] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:03.298 [2024-05-15 08:30:50.167942] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:19:03.298 {
00:19:03.298 "params": {
00:19:03.298 "name": "Nvme$subsystem",
00:19:03.298 "trtype": "$TEST_TRANSPORT",
00:19:03.298 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:03.298 "adrfam": "ipv4",
00:19:03.298 "trsvcid": "$NVMF_PORT",
00:19:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:03.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:03.298 "hdgst": ${hdgst:-false},
00:19:03.298 "ddgst": ${ddgst:-false}
00:19:03.298 },
00:19:03.298 "method": "bdev_nvme_attach_controller"
00:19:03.298 }
00:19:03.298 EOF
00:19:03.298 )")
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
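[Editor's note] A quick consistency check on the verify-run table above: the MiB/s column should equal the IOPS column times the 8192-byte I/O size.

    # 8754.42 IOPS x 8192 B = 71,716,208.64 B/s; divided by 1048576 B/MiB:
    echo '8754.42 * 8192 / 1048576' | bc -l    # -> 68.39, matching the reported 68.39 MiB/s

With the 10-second verify pass clean (Fail/s and TO/s both 0.00), target/zcopy.sh@37 immediately starts a second bdevperf (perfpid 293412) for 5 seconds of 50/50 random read/write at the same queue depth and I/O size (-t 5 -q 128 -w randrw -M 50 -o 8192), this time to drive zero-copy I/O while the target's configuration is poked over RPC.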
00:19:03.298 [2024-05-15 08:30:50.175866] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:03.298 [2024-05-15 08:30:50.175877] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:19:03.298 08:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:19:03.298 "params": {
00:19:03.298 "name": "Nvme1",
00:19:03.298 "trtype": "tcp",
00:19:03.298 "traddr": "10.0.0.2",
00:19:03.298 "adrfam": "ipv4",
00:19:03.298 "trsvcid": "4420",
00:19:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:03.298 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:03.298 "hdgst": false,
00:19:03.298 "ddgst": false
00:19:03.298 },
00:19:03.298 "method": "bdev_nvme_attach_controller"
00:19:03.298 }'
[... the subsystem.c:1997 / nvmf_rpc.c:1531 error pair above repeats at 08:30:50.183886, .191909 and .199931; duplicates elided ...]
00:19:03.298 [2024-05-15 08:30:50.207338] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:19:03.298 [2024-05-15 08:30:50.207375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293412 ]
[... error pair repeats at 08:30:50.207952, .215973 and .223993; duplicates elided ...]
00:19:03.298 EAL: No free 2048 kB hugepages reported on node 1
[... error pair repeats at 08:30:50.232014, .240035, .248056 and .256078; duplicates elided ...]
00:19:03.299 [2024-05-15 08:30:50.261144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[... error pair repeats from 08:30:50.264099 through 08:30:50.336293; duplicates elided ...]
00:19:03.559 [2024-05-15 08:30:50.339710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... error pair repeats from 08:30:50.344317 through 08:30:50.496740; duplicates elided ...]
00:19:03.559 Running I/O for 5 seconds...
[... error pair repeats continuously from 08:30:50.504750 through 08:30:51.937286 while the 5-second randrw run proceeds; duplicates elided ...]
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.124 [2024-05-15 08:30:51.937304] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.124 [2024-05-15 08:30:51.946059] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.124 [2024-05-15 08:30:51.946077] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.124 [2024-05-15 08:30:51.955364] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.124 [2024-05-15 08:30:51.955382] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.124 [2024-05-15 08:30:51.964654] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.124 [2024-05-15 08:30:51.964672] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.124 [2024-05-15 08:30:51.973627] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.124 [2024-05-15 08:30:51.973644] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.124 [2024-05-15 08:30:51.982723] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.124 [2024-05-15 08:30:51.982741] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.124 [2024-05-15 08:30:51.992013] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.124 [2024-05-15 08:30:51.992031] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.124 [2024-05-15 08:30:52.001648] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.124 [2024-05-15 08:30:52.001666] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.124 [2024-05-15 08:30:52.010233] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.124 [2024-05-15 08:30:52.010251] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.018811] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.018830] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.025706] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.025723] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.036035] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.036053] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.045195] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.045213] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.054374] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.054392] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.063547] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.063565] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.072400] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.072417] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.081709] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.081727] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.090962] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.090979] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.100047] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.100064] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.108605] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.108622] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.117922] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.117940] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.126709] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.126727] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.135885] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.135902] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.125 [2024-05-15 08:30:52.145104] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.125 [2024-05-15 08:30:52.145122] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.385 [2024-05-15 08:30:52.153855] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.153873] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.163237] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.163254] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.172511] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.172529] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.181365] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.181383] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.190502] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.190520] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.199794] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.199811] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.208941] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.208959] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.218253] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.218271] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.227340] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.227357] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.236578] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.236595] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.245718] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.245735] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.254635] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.254652] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.263678] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.263695] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.272745] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.272763] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.281897] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.281915] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.290661] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.290677] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.299858] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.299875] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.308557] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.308575] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.318269] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.318286] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.327696] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.327713] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.337072] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.337089] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.345673] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.345690] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.354793] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.354810] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.363940] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.363957] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.373170] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.373187] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.381884] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.381902] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.391075] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.391092] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.386 [2024-05-15 08:30:52.400459] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.386 [2024-05-15 08:30:52.400476] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.646 [2024-05-15 08:30:52.409822] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.646 [2024-05-15 08:30:52.409840] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.646 [2024-05-15 08:30:52.418472] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.646 [2024-05-15 08:30:52.418489] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.646 [2024-05-15 08:30:52.427589] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.646 [2024-05-15 08:30:52.427606] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.646 [2024-05-15 08:30:52.437069] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.646 [2024-05-15 08:30:52.437087] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.646 [2024-05-15 08:30:52.446019] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.646 [2024-05-15 08:30:52.446037] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.646 [2024-05-15 08:30:52.455466] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.646 [2024-05-15 08:30:52.455483] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.646 [2024-05-15 08:30:52.464809] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.464827] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.473996] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.474014] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.482830] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.482847] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.492050] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.492067] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.501347] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.501364] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.510604] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.510621] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.519788] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.519806] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.529102] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.529119] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.537776] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.537794] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.546976] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.546993] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.556083] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.556101] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.565377] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.565394] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.574265] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.574282] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.583505] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.583523] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.592738] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.592756] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.601764] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.601781] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.610843] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.610861] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.620136] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.620153] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.629384] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.629401] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.638662] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.638680] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.647503] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.647522] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.656575] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.656593] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.647 [2024-05-15 08:30:52.665821] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.647 [2024-05-15 08:30:52.665838] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.674671] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.674688] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.683821] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.683838] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.693115] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.693133] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.702278] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.702295] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.711501] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.711518] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.720700] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.720717] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.729995] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.730012] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.738544] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.738561] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.747152] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.747175] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.756379] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.756400] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.765439] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.765456] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.774614] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.774632] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.783952] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.783969] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.793213] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.793231] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.803084] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.803102] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.811794] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.811811] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.820985] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.821002] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.830059] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.830076] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.839268] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.839285] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.847771] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.847789] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.856323] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.856339] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.865475] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.865492] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.874519] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.874536] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.883222] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.883239] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.891952] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.891969] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.901222] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.901239] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.909973] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.909990] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.908 [2024-05-15 08:30:52.918734] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.908 [2024-05-15 08:30:52.918751] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.909 [2024-05-15 08:30:52.927279] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.909 [2024-05-15 08:30:52.927300] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.169 [2024-05-15 08:30:52.936763] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.169 [2024-05-15 08:30:52.936781] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.169 [2024-05-15 08:30:52.946243] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.169 [2024-05-15 08:30:52.946260] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.169 [2024-05-15 08:30:52.954837] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.169 [2024-05-15 08:30:52.954854] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.169 [2024-05-15 08:30:52.964138] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.169 [2024-05-15 08:30:52.964155] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.169 [2024-05-15 08:30:52.972587] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.169 [2024-05-15 08:30:52.972604] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.169 [2024-05-15 08:30:52.981962] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.169 [2024-05-15 08:30:52.981979] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.169 [2024-05-15 08:30:52.991420] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.169 [2024-05-15 08:30:52.991437] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.169 [2024-05-15 08:30:53.001218] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.169 [2024-05-15 08:30:53.001235] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.169 [2024-05-15 08:30:53.009938] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.169 [2024-05-15 08:30:53.009955] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.018593] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.018611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.027374] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.027392] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.036709] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.036726] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.046062] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.046079] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.055390] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.055407] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.064520] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.064537] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.073872] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.073889] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.082928] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.082945] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.092122] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.092138] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.100906] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.100926] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.109956] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.109973] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.118560] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.118577] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.127150] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.127172] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.135701] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.135717] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.145333] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.145350] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.154059] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.154075] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.163296] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.163313] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.172386] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.172404] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.181344] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.181362] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.170 [2024-05-15 08:30:53.191162] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.170 [2024-05-15 08:30:53.191187] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.200524] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.200542] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.209610] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.209628] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.218827] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.218847] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.228625] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.228644] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.237639] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.237658] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.247045] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.247064] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.256402] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.256419] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.265414] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.265431] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.274462] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.274488] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.283674] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.283693] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.292401] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.292421] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.300963] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.300981] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.310128] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.310146] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.319238] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.319256] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.328416] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.328435] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.337627] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.337645] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.346824] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.346841] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.355502] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.355519] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.364282] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.429 [2024-05-15 08:30:53.364300] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.429 [2024-05-15 08:30:53.372879] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.430 [2024-05-15 08:30:53.372896] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.430 [2024-05-15 08:30:53.381467] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.430 [2024-05-15 08:30:53.381485] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.430 [2024-05-15 08:30:53.390986] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.430 [2024-05-15 08:30:53.391004] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.430 [2024-05-15 08:30:53.400494] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.430 [2024-05-15 08:30:53.400512] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.430 [2024-05-15 08:30:53.409881] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.430 [2024-05-15 08:30:53.409899] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.430 [2024-05-15 08:30:53.419076] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.430 [2024-05-15 08:30:53.419093] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.430 [2024-05-15 08:30:53.428299] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.430 [2024-05-15 08:30:53.428317] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.430 [2024-05-15 08:30:53.436870] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.430 [2024-05-15 08:30:53.436888] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.430 [2024-05-15 08:30:53.446151] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.430 [2024-05-15 08:30:53.446176] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.454884] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.454903] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.464082] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.464100] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.473236] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.473255] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.482457] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.482474] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.491641] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.491659] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.500608] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.500626] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.509793] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.509811] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.518726] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.518743] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.527198] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.527215] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.536339] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.536356] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.543111] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.543128] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.553205] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.553223] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.561992] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.562009] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.571764] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.571782] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.580332] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.580349] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.589905] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.589923] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.599429] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.599447] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.607984] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.608001] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.617046] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.617064] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.626846] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.626863] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.635736] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.635753] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.688 [2024-05-15 08:30:53.644249] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.688 [2024-05-15 08:30:53.644266] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:06.688 [2024-05-15 08:30:53.654002] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:06.688 [2024-05-15 08:30:53.654019] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-line error pair repeats for every subsequent add-namespace attempt, roughly one pair every 9-10 ms, timestamps 2024-05-15 08:30:53.663 through 08:30:55.503]
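The error cadence above is the expected behavior of the zcopy test, not a failure: while background I/O is running, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is still attached, so the paused subsystem rejects every attempt. A minimal bash sketch of that kind of retry loop, assuming SPDK's rpc_cmd helper plus a hypothetical $perf_pid for the background I/O job and a hypothetical Malloc0 bdev (the real loop lives in test/nvmf/target/zcopy.sh and is not reproduced in this log):

    # Hypothetical reconstruction, not the verbatim test code; each failing call
    # produces one "Requested NSID 1 already in use" / "Unable to add namespace"
    # pair like those logged above.
    while kill -0 "$perf_pid" 2>/dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
    done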
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:08.504 [2024-05-15 08:30:55.512978] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:08.504 [2024-05-15 08:30:55.512999] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:08.504 [2024-05-15 08:30:55.519317] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:08.504 [2024-05-15 08:30:55.519334] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:08.504
00:19:08.504 Latency(us)
00:19:08.504 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:19:08.504 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:08.504 Nvme1n1            :       5.01   16751.08     130.87      0.00     0.00    7634.27    3276.80   14474.91
00:19:08.504 ===================================================================================================================
00:19:08.504 Total              :               16751.08     130.87      0.00     0.00    7634.27    3276.80   14474.91
[the same error pairs continue while the run winds down, timestamps 2024-05-15 08:30:55.527 through 08:30:55.719]
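The summary table above is internally consistent: at queue depth 128, Little's law (outstanding I/O = IOPS x latency) predicts 128 / 7634.27 us, about 16,766 IOPS, against the measured 16,751.08, and 16,751.08 IOPS x 8192 B per I/O works out to the reported 130.87 MiB/s. The same check in shell arithmetic, with the numbers copied from the table (bc is the only tool assumed):

    qd=128; avg_us=7634.27; iops=16751.08; io_bytes=8192
    echo "scale=6; $qd / ($avg_us / 1000000)" | bc    # ~16766 IOPS predicted by Little's law
    echo "scale=6; $iops * $io_bytes / 1048576" | bc  # ~130.87 MiB/s, matching the MiB/s column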
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:08.765 [2024-05-15 08:30:55.727867] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:08.765 [2024-05-15 08:30:55.727876] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:08.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (293412) - No such process
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 293412
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:08.765 delay0
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:08.765 08:30:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:19:09.024 EAL: No free 2048 kB hugepages reported on node 1
00:19:09.024 [2024-05-15 08:30:55.896305] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:19:15.594 Initializing NVMe Controllers
00:19:15.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:15.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:15.594 Initialization complete. Launching workers.
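The three RPCs traced above swap the subsystem's namespace from the raw malloc bdev to a delay bdev, so that I/O issued by the abort example stays in flight long enough (1000000 us, i.e. one second, for both average and p99 read/write latency) to be aborted. Written out as direct scripts/rpc.py calls instead of the test's rpc_cmd wrapper, and assuming a target already listening on the default RPC socket, the sequence would look like:

    # Detach NSID 1, wrap malloc0 in a 1 s delay bdev, re-attach it as NSID 1.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read and write latency, in usec
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1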
00:19:15.594 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 152
00:19:15.594 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 435, failed to submit 37
00:19:15.594 success 284, unsuccess 151, failed 0
08:31:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
08:31:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 291554 ']'
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 291554
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 291554 ']'
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 291554
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 291554
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 291554'
killing process with pid 291554
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 291554
[2024-05-15 08:31:02.152290] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 291554
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
08:31:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
08:31:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:17.503 08:31:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
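The teardown traced above unloads the initiator-side kernel modules with a retry loop, since nvme-tcp can stay referenced for a moment while connections drain; the bare rmmod lines are modprobe's verbose output as nvme_tcp, nvme_fabrics and nvme_keyring come out. A simplified sketch of that loop (the exact helper is in test/nvmf/common.sh and may differ in detail):

    set +e                                  # unload failures are retried, not fatal
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # -v prints the rmmod calls seen in the log
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e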
00:19:17.503
00:19:17.503 real	0m31.457s
00:19:17.503 user	0m44.448s
00:19:17.503 sys	0m9.182s
00:19:17.503 08:31:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:19:17.503 08:31:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:17.503 ************************************
00:19:17.503 END TEST nvmf_zcopy
00:19:17.503 ************************************
00:19:17.503 08:31:04 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
08:31:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
08:31:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
08:31:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:17.503 ************************************
00:19:17.503 START TEST nvmf_nmic
00:19:17.503 ************************************
00:19:17.504 08:31:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:19:17.764 * Looking for test storage...
00:19:17.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
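The NVME_HOSTNQN/NVME_HOSTID pair generated above via nvme gen-hostnqn is the identity that any host-side connect in the nmic run will present to the target. An illustrative nvme-cli invocation using such an identity (not taken from this log; the address shown is the 10.0.0.2:4420 listener used earlier):

    # Illustrative only: connect to the target with a generated host identity.
    hostnqn=$(nvme gen-hostnqn)             # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$hostnqn"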
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:17.764 08:31:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.039 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:23.040 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:23.040 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:23.040 Found net devices under 0000:86:00.0: cvl_0_0 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.040 08:31:09 
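The pci_bus_cache lookups above classify NICs purely by PCI vendor:device ID and then read the bound netdev names out of sysfs. A minimal sketch of that logic, trimmed to the two Intel E810 IDs (0x8086:0x159b and 0x8086:0x1592) this host could match; the full script also knows the x722 and Mellanox IDs listed in the trace:

  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
      if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
          echo "Found ${pci##*/} ($vendor - $device)"        # e.g. 0000:86:00.0
          [ -d "$pci/net" ] && ls "$pci/net"                 # bound netdevs, e.g. cvl_0_0
      fi
  done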
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:23.040 Found net devices under 0000:86:00.1: cvl_0_1 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:23.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:23.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:19:23.040 00:19:23.040 --- 10.0.0.2 ping statistics --- 00:19:23.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.040 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:19:23.040 00:19:23.040 --- 10.0.0.1 ping statistics --- 00:19:23.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.040 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=298760 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 298760 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 298760 ']' 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:23.040 08:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.040 [2024-05-15 08:31:09.967938] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
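Collected from the interleaved trace above, the network setup is a two-port split: the first E810 port moves into a private namespace and plays the target (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), with a firewall exception for the NVMe/TCP port and a ping in each direction as a sanity check. The same steps, in order and with this run's interface names:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator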
00:19:23.040 [2024-05-15 08:31:09.967985] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.040 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.040 [2024-05-15 08:31:10.027691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.300 [2024-05-15 08:31:10.118768] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.300 [2024-05-15 08:31:10.118801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.300 [2024-05-15 08:31:10.118808] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.300 [2024-05-15 08:31:10.118815] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.300 [2024-05-15 08:31:10.118820] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.300 [2024-05-15 08:31:10.118874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.300 [2024-05-15 08:31:10.118972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.300 [2024-05-15 08:31:10.119069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.300 [2024-05-15 08:31:10.119071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.869 [2024-05-15 08:31:10.833154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.869 Malloc0 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.869 [2024-05-15 08:31:10.884464] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:23.869 [2024-05-15 08:31:10.884694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:23.869 test case1: single bdev can't be used in multiple subsystems 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.869 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:24.128 [2024-05-15 08:31:10.908580] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:24.128 [2024-05-15 08:31:10.908597] subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:24.128 [2024-05-15 08:31:10.908604] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.128 request: 00:19:24.128 { 00:19:24.128 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:24.128 "namespace": { 00:19:24.128 "bdev_name": "Malloc0", 00:19:24.128 "no_auto_visible": false 00:19:24.128 }, 00:19:24.128 "method": "nvmf_subsystem_add_ns", 00:19:24.128 "req_id": 1 00:19:24.128 } 00:19:24.128 Got JSON-RPC error response 00:19:24.128 response: 00:19:24.128 { 00:19:24.128 "code": -32602, 00:19:24.128 "message": "Invalid parameters" 00:19:24.128 } 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:24.128 08:31:10 
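Test case1 above is a deliberate failure: a bdev can be claimed by only one subsystem, so the second nvmf_subsystem_add_ns is expected to fail with -32602 "Invalid parameters". A condensed, standalone replay of that check using the same rpc.py calls (path, bdev and subsystem names as in this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0                        # one shared backing bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim succeeds
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'Adding namespace failed - expected result.'            # bdev already claimed
  fi

Test case2, which follows in the trace, exercises the inverse: the same subsystem adds a second listener on port 4421 and the host connects once per path.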
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:24.128 Adding namespace failed - expected result. 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:24.128 test case2: host connect to nvmf target in multiple paths 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:24.128 [2024-05-15 08:31:10.920698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:24.128 08:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.129 08:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:25.099 08:31:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:26.479 08:31:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:26.479 08:31:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:19:26.479 08:31:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:26.479 08:31:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:26.479 08:31:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:19:28.388 08:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:28.388 08:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:28.388 08:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:28.388 08:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:28.388 08:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:28.388 08:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:19:28.388 08:31:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:28.388 [global] 00:19:28.388 thread=1 00:19:28.388 invalidate=1 00:19:28.388 rw=write 00:19:28.388 time_based=1 00:19:28.388 runtime=1 00:19:28.388 ioengine=libaio 00:19:28.388 direct=1 00:19:28.388 bs=4096 00:19:28.388 iodepth=1 00:19:28.388 norandommap=0 00:19:28.388 numjobs=1 00:19:28.388 00:19:28.388 verify_dump=1 00:19:28.388 verify_backlog=512 00:19:28.388 verify_state_save=0 00:19:28.388 do_verify=1 00:19:28.388 verify=crc32c-intel 00:19:28.388 [job0] 00:19:28.388 filename=/dev/nvme0n1 00:19:28.388 Could not set queue depth (nvme0n1) 00:19:28.647 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:19:28.647 fio-3.35 00:19:28.647 Starting 1 thread 00:19:30.029 00:19:30.029 job0: (groupid=0, jobs=1): err= 0: pid=299840: Wed May 15 08:31:16 2024 00:19:30.029 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:19:30.029 slat (nsec): min=9869, max=24787, avg=21388.95, stdev=2731.15 00:19:30.029 clat (usec): min=40477, max=41088, avg=40953.04, stdev=116.48 00:19:30.029 lat (usec): min=40487, max=41110, avg=40974.43, stdev=118.89 00:19:30.029 clat percentiles (usec): 00:19:30.029 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:30.029 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:30.029 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:30.029 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:30.029 | 99.99th=[41157] 00:19:30.029 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:19:30.029 slat (usec): min=10, max=23576, avg=57.97, stdev=1041.44 00:19:30.029 clat (usec): min=120, max=276, avg=145.58, stdev=10.37 00:19:30.029 lat (usec): min=137, max=23793, avg=203.55, stdev=1044.64 00:19:30.029 clat percentiles (usec): 00:19:30.029 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 139], 20.00th=[ 141], 00:19:30.029 | 30.00th=[ 143], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 145], 00:19:30.029 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 155], 00:19:30.029 | 99.00th=[ 180], 99.50th=[ 217], 99.90th=[ 277], 99.95th=[ 277], 00:19:30.029 | 99.99th=[ 277] 00:19:30.029 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:30.029 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:30.029 lat (usec) : 250=95.69%, 500=0.19% 00:19:30.029 lat (msec) : 50=4.12% 00:19:30.029 cpu : usr=0.50%, sys=0.79%, ctx=538, majf=0, minf=2 00:19:30.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.029 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.029 00:19:30.029 Run status group 0 (all jobs): 00:19:30.029 READ: bw=87.3KiB/s (89.4kB/s), 87.3KiB/s-87.3KiB/s (89.4kB/s-89.4kB/s), io=88.0KiB (90.1kB), run=1008-1008msec 00:19:30.029 WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec 00:19:30.029 00:19:30.029 Disk stats (read/write): 00:19:30.029 nvme0n1: ios=44/512, merge=0/0, ticks=1763/72, in_queue=1835, util=98.40% 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:30.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.029 08:31:16 
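The fio-wrapper job above can be reproduced without the wrapper. The following is a roughly equivalent standalone invocation built from the traced job file, assuming the same /dev/nvme0n1 node the connect created (device naming varies across hosts):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

Because do_verify is set, the READ line in the summary is fio's crc32c verify pass over the written blocks, not application reads; that is why a 1-second write job still reports read I/O.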
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.029 rmmod nvme_tcp 00:19:30.029 rmmod nvme_fabrics 00:19:30.029 rmmod nvme_keyring 00:19:30.029 08:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 298760 ']' 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 298760 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 298760 ']' 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 298760 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 298760 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 298760' 00:19:30.029 killing process with pid 298760 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 298760 00:19:30.029 [2024-05-15 08:31:17.049561] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:30.029 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 298760 00:19:30.289 08:31:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:30.289 08:31:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:30.289 08:31:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:30.289 08:31:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.289 08:31:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.289 08:31:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.289 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.289 08:31:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.828 08:31:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:32.828 00:19:32.828 real 0m14.845s 00:19:32.828 user 0m35.185s 00:19:32.828 sys 0m4.790s 00:19:32.828 08:31:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:19:32.828 08:31:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:32.828 ************************************ 00:19:32.828 END TEST nvmf_nmic 00:19:32.828 ************************************ 00:19:32.828 08:31:19 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:32.828 08:31:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:32.828 08:31:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:32.828 08:31:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:32.828 ************************************ 00:19:32.829 START TEST nvmf_fio_target 00:19:32.829 ************************************ 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:32.829 * Looking for test storage... 00:19:32.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:32.829 08:31:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.106 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.107 08:31:24 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:38.107 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:38.107 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.107 08:31:24 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:38.107 Found net devices under 0000:86:00.0: cvl_0_0 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:38.107 Found net devices under 0000:86:00.1: cvl_0_1 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:38.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:19:38.107 00:19:38.107 --- 10.0.0.2 ping statistics --- 00:19:38.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.107 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:19:38.107 00:19:38.107 --- 10.0.0.1 ping statistics --- 00:19:38.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.107 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=303588 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 303588 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 303588 ']' 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
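The nvmfappstart step here boils down to launching nvmf_tgt inside the target namespace and blocking until its JSON-RPC socket answers. A simplified sketch under that reading; the readiness probe is a stand-in for the harness's waitforlisten, and the paths are the ones this run uses:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                   # -m 0xF: one reactor on each of cores 0-3
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                # first successful RPC means the app listens
  done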
00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:38.107 08:31:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.108 [2024-05-15 08:31:24.864305] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:19:38.108 [2024-05-15 08:31:24.864351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.108 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.108 [2024-05-15 08:31:24.922076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:38.108 [2024-05-15 08:31:25.003858] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.108 [2024-05-15 08:31:25.003892] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.108 [2024-05-15 08:31:25.003899] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.108 [2024-05-15 08:31:25.003905] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.108 [2024-05-15 08:31:25.003909] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.108 [2024-05-15 08:31:25.003958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.108 [2024-05-15 08:31:25.004054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.108 [2024-05-15 08:31:25.004138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:38.108 [2024-05-15 08:31:25.004139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.676 08:31:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:38.676 08:31:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:19:38.676 08:31:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.676 08:31:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.676 08:31:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.936 08:31:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.936 08:31:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:38.936 [2024-05-15 08:31:25.874600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.936 08:31:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.195 08:31:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:39.195 08:31:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.454 08:31:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:39.454 08:31:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.712 08:31:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:39.712 08:31:26 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.712 08:31:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:39.712 08:31:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:39.970 08:31:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:40.229 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:40.229 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:40.487 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:40.487 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:40.487 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:40.487 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:40.746 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:41.005 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:41.005 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:41.005 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:41.005 08:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:41.264 08:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.523 [2024-05-15 08:31:28.328178] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:41.523 [2024-05-15 08:31:28.328402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.523 08:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:41.782 08:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:41.782 08:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:43.160 08:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:19:43.160 08:31:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:19:43.160 08:31:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:43.160 08:31:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:19:43.160 08:31:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:19:43.160 08:31:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:19:45.077 08:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:45.077 08:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:45.077 08:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:45.077 08:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:45.077 08:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:45.077 08:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:45.077 08:31:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:45.077 [global] 00:19:45.077 thread=1 00:19:45.077 invalidate=1 00:19:45.077 rw=write 00:19:45.077 time_based=1 00:19:45.077 runtime=1 00:19:45.077 ioengine=libaio 00:19:45.077 direct=1 00:19:45.077 bs=4096 00:19:45.077 iodepth=1 00:19:45.077 norandommap=0 00:19:45.077 numjobs=1 00:19:45.077 00:19:45.077 verify_dump=1 00:19:45.077 verify_backlog=512 00:19:45.077 verify_state_save=0 00:19:45.077 do_verify=1 00:19:45.077 verify=crc32c-intel 00:19:45.077 [job0] 00:19:45.077 filename=/dev/nvme0n1 00:19:45.077 [job1] 00:19:45.077 filename=/dev/nvme0n2 00:19:45.077 [job2] 00:19:45.077 filename=/dev/nvme0n3 00:19:45.077 [job3] 00:19:45.077 filename=/dev/nvme0n4 00:19:45.077 Could not set queue depth (nvme0n1) 00:19:45.077 Could not set queue depth (nvme0n2) 00:19:45.077 Could not set queue depth (nvme0n3) 00:19:45.077 Could not set queue depth (nvme0n4) 00:19:45.353 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.353 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.353 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.353 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.353 fio-3.35 00:19:45.353 Starting 4 threads 00:19:46.737 00:19:46.737 job0: (groupid=0, jobs=1): err= 0: pid=304936: Wed May 15 08:31:33 2024 00:19:46.737 read: IOPS=2190, BW=8763KiB/s (8974kB/s)(8772KiB/1001msec) 00:19:46.737 slat (nsec): min=6909, max=35027, avg=8017.23, stdev=1538.34 00:19:46.737 clat (usec): min=175, max=557, avg=240.36, stdev=33.85 00:19:46.737 lat (usec): min=183, max=572, avg=248.38, stdev=34.15 00:19:46.737 clat percentiles (usec): 00:19:46.737 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 212], 00:19:46.737 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:19:46.737 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 285], 00:19:46.737 | 99.00th=[ 363], 99.50th=[ 461], 99.90th=[ 502], 99.95th=[ 537], 00:19:46.737 | 99.99th=[ 562] 
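A key for the fio latency blocks that repeat through the rest of this run: slat is submission latency (handing the I/O to the kernel), clat is completion latency, and lat is their end-to-end sum; the percentile tables are labeled clat percentiles. The job0 read block above is self-consistent on this point:

    avg slat 8.02 us + avg clat 240.36 us = avg lat 248.38 us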
00:19:46.737 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:19:46.737 slat (nsec): min=9972, max=50099, avg=11746.22, stdev=2873.56 00:19:46.737 clat (usec): min=122, max=352, avg=160.27, stdev=22.03 00:19:46.737 lat (usec): min=132, max=398, avg=172.02, stdev=23.26 00:19:46.737 clat percentiles (usec): 00:19:46.737 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:19:46.737 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:19:46.737 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 204], 00:19:46.737 | 99.00th=[ 245], 99.50th=[ 265], 99.90th=[ 310], 99.95th=[ 310], 00:19:46.737 | 99.99th=[ 355] 00:19:46.737 bw ( KiB/s): min=11088, max=11088, per=45.75%, avg=11088.00, stdev= 0.00, samples=1 00:19:46.737 iops : min= 2772, max= 2772, avg=2772.00, stdev= 0.00, samples=1 00:19:46.737 lat (usec) : 250=84.39%, 500=15.55%, 750=0.06% 00:19:46.737 cpu : usr=3.40%, sys=8.10%, ctx=4753, majf=0, minf=1 00:19:46.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.737 issued rwts: total=2193,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:46.737 job1: (groupid=0, jobs=1): err= 0: pid=304937: Wed May 15 08:31:33 2024 00:19:46.737 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:19:46.737 slat (nsec): min=12603, max=23020, avg=21228.50, stdev=2892.78 00:19:46.737 clat (usec): min=40946, max=41981, avg=41079.94, stdev=289.69 00:19:46.737 lat (usec): min=40969, max=42001, avg=41101.17, stdev=289.17 00:19:46.737 clat percentiles (usec): 00:19:46.737 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:46.737 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:46.737 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:19:46.737 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:46.737 | 99.99th=[42206] 00:19:46.737 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:19:46.737 slat (nsec): min=9673, max=60449, avg=12570.66, stdev=3620.49 00:19:46.737 clat (usec): min=132, max=319, avg=198.70, stdev=29.70 00:19:46.737 lat (usec): min=144, max=363, avg=211.27, stdev=29.94 00:19:46.737 clat percentiles (usec): 00:19:46.737 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 163], 20.00th=[ 176], 00:19:46.737 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 204], 00:19:46.737 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 245], 00:19:46.737 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 318], 99.95th=[ 318], 00:19:46.737 | 99.99th=[ 318] 00:19:46.737 bw ( KiB/s): min= 4096, max= 4096, per=16.90%, avg=4096.00, stdev= 0.00, samples=1 00:19:46.737 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:46.737 lat (usec) : 250=93.45%, 500=2.43% 00:19:46.737 lat (msec) : 50=4.12% 00:19:46.737 cpu : usr=0.49%, sys=0.49%, ctx=537, majf=0, minf=1 00:19:46.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.737 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.737 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:19:46.737 job2: (groupid=0, jobs=1): err= 0: pid=304938: Wed May 15 08:31:33 2024 00:19:46.737 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:19:46.737 slat (nsec): min=9917, max=24343, avg=13096.32, stdev=3589.91 00:19:46.737 clat (usec): min=40540, max=43917, avg=41146.57, stdev=663.40 00:19:46.737 lat (usec): min=40550, max=43942, avg=41159.67, stdev=665.68 00:19:46.737 clat percentiles (usec): 00:19:46.737 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:46.737 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:46.737 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:19:46.737 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:19:46.737 | 99.99th=[43779] 00:19:46.737 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:19:46.737 slat (nsec): min=10231, max=41279, avg=13282.22, stdev=2580.09 00:19:46.737 clat (usec): min=145, max=404, avg=182.49, stdev=22.68 00:19:46.737 lat (usec): min=157, max=445, avg=195.77, stdev=23.31 00:19:46.737 clat percentiles (usec): 00:19:46.737 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:19:46.737 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 184], 00:19:46.737 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 221], 00:19:46.737 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 404], 99.95th=[ 404], 00:19:46.737 | 99.99th=[ 404] 00:19:46.737 bw ( KiB/s): min= 4096, max= 4096, per=16.90%, avg=4096.00, stdev= 0.00, samples=1 00:19:46.737 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:46.737 lat (usec) : 250=94.19%, 500=1.69% 00:19:46.737 lat (msec) : 50=4.12% 00:19:46.737 cpu : usr=0.40%, sys=0.99%, ctx=536, majf=0, minf=1 00:19:46.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.737 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:46.737 job3: (groupid=0, jobs=1): err= 0: pid=304939: Wed May 15 08:31:33 2024 00:19:46.737 read: IOPS=2075, BW=8304KiB/s (8503kB/s)(8312KiB/1001msec) 00:19:46.737 slat (nsec): min=3608, max=47732, avg=7088.39, stdev=1400.69 00:19:46.737 clat (usec): min=176, max=41102, avg=247.62, stdev=897.72 00:19:46.737 lat (usec): min=184, max=41106, avg=254.70, stdev=897.66 00:19:46.737 clat percentiles (usec): 00:19:46.737 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:19:46.737 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:19:46.737 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:19:46.737 | 99.00th=[ 306], 99.50th=[ 449], 99.90th=[ 668], 99.95th=[ 1663], 00:19:46.737 | 99.99th=[41157] 00:19:46.737 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:19:46.737 slat (nsec): min=9338, max=49549, avg=10521.89, stdev=1535.97 00:19:46.737 clat (usec): min=119, max=1454, avg=168.50, stdev=43.17 00:19:46.737 lat (usec): min=130, max=1465, avg=179.02, stdev=43.41 00:19:46.737 clat percentiles (usec): 00:19:46.737 | 1.00th=[ 129], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:19:46.737 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:19:46.737 | 70.00th=[ 172], 80.00th=[ 182], 90.00th=[ 200], 95.00th=[ 221], 00:19:46.737 | 
99.00th=[ 249], 99.50th=[ 273], 99.90th=[ 392], 99.95th=[ 1369], 00:19:46.737 | 99.99th=[ 1450] 00:19:46.737 bw ( KiB/s): min= 8792, max= 8792, per=36.28%, avg=8792.00, stdev= 0.00, samples=1 00:19:46.737 iops : min= 2198, max= 2198, avg=2198.00, stdev= 0.00, samples=1 00:19:46.737 lat (usec) : 250=92.88%, 500=6.94%, 750=0.09% 00:19:46.737 lat (msec) : 2=0.06%, 50=0.02% 00:19:46.737 cpu : usr=2.80%, sys=3.80%, ctx=4640, majf=0, minf=2 00:19:46.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.737 issued rwts: total=2078,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:46.737 00:19:46.737 Run status group 0 (all jobs): 00:19:46.737 READ: bw=16.6MiB/s (17.4MB/s), 86.8KiB/s-8763KiB/s (88.9kB/s-8974kB/s), io=16.9MiB (17.7MB), run=1001-1014msec 00:19:46.737 WRITE: bw=23.7MiB/s (24.8MB/s), 2020KiB/s-9.99MiB/s (2068kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1014msec 00:19:46.737 00:19:46.737 Disk stats (read/write): 00:19:46.737 nvme0n1: ios=1999/2048, merge=0/0, ticks=465/292, in_queue=757, util=86.77% 00:19:46.737 nvme0n2: ios=43/512, merge=0/0, ticks=1728/103, in_queue=1831, util=98.58% 00:19:46.737 nvme0n3: ios=76/512, merge=0/0, ticks=1550/88, in_queue=1638, util=98.23% 00:19:46.737 nvme0n4: ios=1834/2048, merge=0/0, ticks=1437/345, in_queue=1782, util=98.21% 00:19:46.737 08:31:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:46.737 [global] 00:19:46.737 thread=1 00:19:46.737 invalidate=1 00:19:46.737 rw=randwrite 00:19:46.737 time_based=1 00:19:46.737 runtime=1 00:19:46.737 ioengine=libaio 00:19:46.737 direct=1 00:19:46.737 bs=4096 00:19:46.737 iodepth=1 00:19:46.737 norandommap=0 00:19:46.737 numjobs=1 00:19:46.737 00:19:46.737 verify_dump=1 00:19:46.737 verify_backlog=512 00:19:46.737 verify_state_save=0 00:19:46.737 do_verify=1 00:19:46.737 verify=crc32c-intel 00:19:46.737 [job0] 00:19:46.737 filename=/dev/nvme0n1 00:19:46.737 [job1] 00:19:46.737 filename=/dev/nvme0n2 00:19:46.737 [job2] 00:19:46.737 filename=/dev/nvme0n3 00:19:46.737 [job3] 00:19:46.737 filename=/dev/nvme0n4 00:19:46.737 Could not set queue depth (nvme0n1) 00:19:46.737 Could not set queue depth (nvme0n2) 00:19:46.737 Could not set queue depth (nvme0n3) 00:19:46.737 Could not set queue depth (nvme0n4) 00:19:46.737 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.737 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.737 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.737 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.737 fio-3.35 00:19:46.737 Starting 4 threads 00:19:48.112 00:19:48.112 job0: (groupid=0, jobs=1): err= 0: pid=305324: Wed May 15 08:31:34 2024 00:19:48.112 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:19:48.112 slat (nsec): min=6025, max=16015, avg=6870.46, stdev=642.30 00:19:48.112 clat (usec): min=160, max=351, avg=211.49, stdev=22.83 00:19:48.112 lat (usec): min=168, max=357, avg=218.36, stdev=22.87 00:19:48.112 clat 
percentiles (usec): 00:19:48.112 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:19:48.112 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:19:48.112 | 70.00th=[ 219], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 255], 00:19:48.112 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 289], 99.95th=[ 293], 00:19:48.112 | 99.99th=[ 351] 00:19:48.112 write: IOPS=2649, BW=10.3MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:19:48.112 slat (nsec): min=8521, max=37715, avg=9576.41, stdev=1224.17 00:19:48.112 clat (usec): min=113, max=396, avg=152.51, stdev=16.80 00:19:48.112 lat (usec): min=122, max=432, avg=162.09, stdev=17.15 00:19:48.112 clat percentiles (usec): 00:19:48.112 | 1.00th=[ 123], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 141], 00:19:48.112 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:19:48.112 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 180], 00:19:48.112 | 99.00th=[ 194], 99.50th=[ 206], 99.90th=[ 289], 99.95th=[ 314], 00:19:48.112 | 99.99th=[ 396] 00:19:48.112 bw ( KiB/s): min=12288, max=12288, per=44.09%, avg=12288.00, stdev= 0.00, samples=1 00:19:48.112 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:48.112 lat (usec) : 250=95.76%, 500=4.24% 00:19:48.112 cpu : usr=2.50%, sys=4.60%, ctx=5212, majf=0, minf=2 00:19:48.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.112 issued rwts: total=2560,2652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.112 job1: (groupid=0, jobs=1): err= 0: pid=305330: Wed May 15 08:31:34 2024 00:19:48.112 read: IOPS=2303, BW=9215KiB/s (9436kB/s)(9224KiB/1001msec) 00:19:48.112 slat (nsec): min=6105, max=21583, avg=7180.58, stdev=875.04 00:19:48.112 clat (usec): min=168, max=1429, avg=229.99, stdev=50.32 00:19:48.112 lat (usec): min=175, max=1436, avg=237.17, stdev=50.32 00:19:48.112 clat percentiles (usec): 00:19:48.112 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:19:48.112 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:19:48.112 | 70.00th=[ 233], 80.00th=[ 249], 90.00th=[ 281], 95.00th=[ 318], 00:19:48.112 | 99.00th=[ 392], 99.50th=[ 416], 99.90th=[ 445], 99.95th=[ 445], 00:19:48.112 | 99.99th=[ 1434] 00:19:48.112 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:19:48.112 slat (nsec): min=8474, max=42295, avg=9662.60, stdev=1408.42 00:19:48.112 clat (usec): min=116, max=366, avg=163.08, stdev=20.59 00:19:48.112 lat (usec): min=126, max=395, avg=172.74, stdev=20.75 00:19:48.112 clat percentiles (usec): 00:19:48.112 | 1.00th=[ 126], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 147], 00:19:48.112 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:19:48.112 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:19:48.112 | 99.00th=[ 239], 99.50th=[ 255], 99.90th=[ 314], 99.95th=[ 355], 00:19:48.112 | 99.99th=[ 367] 00:19:48.112 bw ( KiB/s): min=12096, max=12096, per=43.40%, avg=12096.00, stdev= 0.00, samples=1 00:19:48.112 iops : min= 3024, max= 3024, avg=3024.00, stdev= 0.00, samples=1 00:19:48.112 lat (usec) : 250=90.38%, 500=9.60% 00:19:48.112 lat (msec) : 2=0.02% 00:19:48.112 cpu : usr=2.70%, sys=4.10%, ctx=4866, majf=0, minf=1 00:19:48.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.112 issued rwts: total=2306,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.112 job2: (groupid=0, jobs=1): err= 0: pid=305340: Wed May 15 08:31:34 2024 00:19:48.112 read: IOPS=25, BW=104KiB/s (106kB/s)(104KiB/1002msec) 00:19:48.112 slat (nsec): min=7168, max=22912, avg=19433.69, stdev=5324.06 00:19:48.112 clat (usec): min=264, max=42149, avg=34884.59, stdev=15036.84 00:19:48.112 lat (usec): min=274, max=42156, avg=34904.02, stdev=15037.11 00:19:48.112 clat percentiles (usec): 00:19:48.112 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[40633], 00:19:48.112 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:48.112 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:48.112 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:48.112 | 99.99th=[42206] 00:19:48.112 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:19:48.112 slat (nsec): min=9069, max=43373, avg=10094.14, stdev=1727.03 00:19:48.112 clat (usec): min=141, max=378, avg=172.41, stdev=16.11 00:19:48.112 lat (usec): min=151, max=422, avg=182.51, stdev=17.02 00:19:48.112 clat percentiles (usec): 00:19:48.112 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:19:48.112 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:19:48.112 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 194], 00:19:48.112 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 379], 99.95th=[ 379], 00:19:48.112 | 99.99th=[ 379] 00:19:48.112 bw ( KiB/s): min= 4096, max= 4096, per=14.70%, avg=4096.00, stdev= 0.00, samples=1 00:19:48.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:48.112 lat (usec) : 250=94.98%, 500=0.93% 00:19:48.112 lat (msec) : 50=4.09% 00:19:48.112 cpu : usr=0.20%, sys=0.50%, ctx=538, majf=0, minf=1 00:19:48.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.112 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.112 job3: (groupid=0, jobs=1): err= 0: pid=305346: Wed May 15 08:31:34 2024 00:19:48.112 read: IOPS=994, BW=3977KiB/s (4072kB/s)(4144KiB/1042msec) 00:19:48.112 slat (nsec): min=7843, max=44820, avg=10556.63, stdev=5729.49 00:19:48.112 clat (usec): min=176, max=41990, avg=704.73, stdev=4402.05 00:19:48.112 lat (usec): min=199, max=42002, avg=715.29, stdev=4402.38 00:19:48.112 clat percentiles (usec): 00:19:48.112 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:19:48.112 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:19:48.112 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 265], 00:19:48.112 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:48.112 | 99.99th=[42206] 00:19:48.112 write: IOPS=1474, BW=5896KiB/s (6038kB/s)(6144KiB/1042msec); 0 zone resets 00:19:48.112 slat (nsec): min=10646, max=77077, avg=14188.39, stdev=6838.15 00:19:48.112 clat (usec): min=120, max=2118, avg=175.13, stdev=52.94 00:19:48.112 lat (usec): min=147, 
max=2136, avg=189.32, stdev=53.72 00:19:48.112 clat percentiles (usec): 00:19:48.112 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:19:48.112 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:19:48.112 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 206], 00:19:48.112 | 99.00th=[ 225], 99.50th=[ 249], 99.90th=[ 379], 99.95th=[ 2114], 00:19:48.113 | 99.99th=[ 2114] 00:19:48.113 bw ( KiB/s): min= 1944, max=10344, per=22.05%, avg=6144.00, stdev=5939.70, samples=2 00:19:48.113 iops : min= 486, max= 2586, avg=1536.00, stdev=1484.92, samples=2 00:19:48.113 lat (usec) : 250=94.79%, 500=4.70% 00:19:48.113 lat (msec) : 4=0.04%, 50=0.47% 00:19:48.113 cpu : usr=2.40%, sys=3.94%, ctx=2573, majf=0, minf=1 00:19:48.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.113 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.113 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.113 00:19:48.113 Run status group 0 (all jobs): 00:19:48.113 READ: bw=22.2MiB/s (23.3MB/s), 104KiB/s-9.99MiB/s (106kB/s-10.5MB/s), io=23.2MiB (24.3MB), run=1001-1042msec 00:19:48.113 WRITE: bw=27.2MiB/s (28.5MB/s), 2044KiB/s-10.3MiB/s (2093kB/s-10.9MB/s), io=28.4MiB (29.7MB), run=1001-1042msec 00:19:48.113 00:19:48.113 Disk stats (read/write): 00:19:48.113 nvme0n1: ios=2098/2560, merge=0/0, ticks=434/377, in_queue=811, util=86.97% 00:19:48.113 nvme0n2: ios=2027/2048, merge=0/0, ticks=554/322, in_queue=876, util=91.17% 00:19:48.113 nvme0n3: ios=22/512, merge=0/0, ticks=743/90, in_queue=833, util=88.97% 00:19:48.113 nvme0n4: ios=1031/1536, merge=0/0, ticks=517/256, in_queue=773, util=89.73% 00:19:48.113 08:31:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:48.113 [global] 00:19:48.113 thread=1 00:19:48.113 invalidate=1 00:19:48.113 rw=write 00:19:48.113 time_based=1 00:19:48.113 runtime=1 00:19:48.113 ioengine=libaio 00:19:48.113 direct=1 00:19:48.113 bs=4096 00:19:48.113 iodepth=128 00:19:48.113 norandommap=0 00:19:48.113 numjobs=1 00:19:48.113 00:19:48.113 verify_dump=1 00:19:48.113 verify_backlog=512 00:19:48.113 verify_state_save=0 00:19:48.113 do_verify=1 00:19:48.113 verify=crc32c-intel 00:19:48.113 [job0] 00:19:48.113 filename=/dev/nvme0n1 00:19:48.113 [job1] 00:19:48.113 filename=/dev/nvme0n2 00:19:48.113 [job2] 00:19:48.113 filename=/dev/nvme0n3 00:19:48.113 [job3] 00:19:48.113 filename=/dev/nvme0n4 00:19:48.113 Could not set queue depth (nvme0n1) 00:19:48.113 Could not set queue depth (nvme0n2) 00:19:48.113 Could not set queue depth (nvme0n3) 00:19:48.113 Could not set queue depth (nvme0n4) 00:19:48.370 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.370 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.370 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.370 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.370 fio-3.35 00:19:48.370 Starting 4 threads 00:19:49.745 00:19:49.745 job0: (groupid=0, jobs=1): err= 0: pid=305785: Wed May 15 08:31:36 2024 
00:19:49.745 read: IOPS=2112, BW=8451KiB/s (8654kB/s)(8536KiB/1010msec) 00:19:49.745 slat (nsec): min=1230, max=12607k, avg=185706.44, stdev=1128481.31 00:19:49.745 clat (msec): min=4, max=100, avg=18.78, stdev=15.35 00:19:49.745 lat (msec): min=4, max=100, avg=18.96, stdev=15.52 00:19:49.745 clat percentiles (msec): 00:19:49.745 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:19:49.745 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 17], 00:19:49.745 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 34], 95.00th=[ 54], 00:19:49.745 | 99.00th=[ 84], 99.50th=[ 93], 99.90th=[ 102], 99.95th=[ 102], 00:19:49.745 | 99.99th=[ 102] 00:19:49.745 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:19:49.745 slat (usec): min=2, max=15789, avg=229.02, stdev=1141.65 00:19:49.745 clat (msec): min=3, max=115, avg=34.41, stdev=30.66 00:19:49.745 lat (msec): min=3, max=115, avg=34.64, stdev=30.86 00:19:49.745 clat percentiles (msec): 00:19:49.745 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 10], 00:19:49.745 | 30.00th=[ 16], 40.00th=[ 19], 50.00th=[ 23], 60.00th=[ 27], 00:19:49.745 | 70.00th=[ 33], 80.00th=[ 71], 90.00th=[ 89], 95.00th=[ 97], 00:19:49.745 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 116], 99.95th=[ 116], 00:19:49.745 | 99.99th=[ 116] 00:19:49.745 bw ( KiB/s): min=10056, max=10096, per=19.11%, avg=10076.00, stdev=28.28, samples=2 00:19:49.745 iops : min= 2514, max= 2524, avg=2519.00, stdev= 7.07, samples=2 00:19:49.745 lat (msec) : 4=0.13%, 10=23.11%, 20=34.36%, 50=26.59%, 100=13.72% 00:19:49.745 lat (msec) : 250=2.09% 00:19:49.745 cpu : usr=2.87%, sys=3.17%, ctx=263, majf=0, minf=1 00:19:49.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:49.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:49.745 issued rwts: total=2134,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:49.745 job1: (groupid=0, jobs=1): err= 0: pid=305799: Wed May 15 08:31:36 2024 00:19:49.745 read: IOPS=3957, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1006msec) 00:19:49.745 slat (nsec): min=1005, max=44498k, avg=119023.51, stdev=1167387.43 00:19:49.745 clat (msec): min=2, max=100, avg=15.74, stdev=14.04 00:19:49.745 lat (msec): min=2, max=100, avg=15.86, stdev=14.13 00:19:49.745 clat percentiles (msec): 00:19:49.745 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:19:49.745 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 15], 00:19:49.745 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 26], 95.00th=[ 35], 00:19:49.745 | 99.00th=[ 93], 99.50th=[ 101], 99.90th=[ 101], 99.95th=[ 101], 00:19:49.745 | 99.99th=[ 101] 00:19:49.745 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:19:49.745 slat (nsec): min=1706, max=18655k, avg=110219.96, stdev=807980.64 00:19:49.745 clat (usec): min=3203, max=87156, avg=15865.19, stdev=15596.02 00:19:49.745 lat (usec): min=3208, max=87172, avg=15975.41, stdev=15708.21 00:19:49.745 clat percentiles (usec): 00:19:49.745 | 1.00th=[ 4113], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 7308], 00:19:49.745 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[11600], 00:19:49.745 | 70.00th=[13042], 80.00th=[18744], 90.00th=[33162], 95.00th=[53740], 00:19:49.745 | 99.00th=[79168], 99.50th=[82314], 99.90th=[84411], 99.95th=[84411], 00:19:49.745 | 99.99th=[87557] 00:19:49.745 bw ( KiB/s): min=16384, 
max=16384, per=31.08%, avg=16384.00, stdev= 0.00, samples=2 00:19:49.745 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:49.745 lat (msec) : 4=2.01%, 10=41.09%, 20=39.21%, 50=13.52%, 100=3.79% 00:19:49.745 lat (msec) : 250=0.38% 00:19:49.745 cpu : usr=3.18%, sys=4.68%, ctx=248, majf=0, minf=1 00:19:49.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:49.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:49.745 issued rwts: total=3981,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:49.745 job2: (groupid=0, jobs=1): err= 0: pid=305813: Wed May 15 08:31:36 2024 00:19:49.745 read: IOPS=3466, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1006msec) 00:19:49.745 slat (nsec): min=1332, max=20286k, avg=127292.76, stdev=999115.58 00:19:49.745 clat (usec): min=781, max=86546, avg=15919.65, stdev=10009.11 00:19:49.745 lat (usec): min=788, max=86556, avg=16046.94, stdev=10113.19 00:19:49.745 clat percentiles (usec): 00:19:49.745 | 1.00th=[ 1205], 5.00th=[ 2737], 10.00th=[ 6652], 20.00th=[11207], 00:19:49.745 | 30.00th=[11863], 40.00th=[12780], 50.00th=[14353], 60.00th=[16057], 00:19:49.745 | 70.00th=[17957], 80.00th=[18744], 90.00th=[26084], 95.00th=[27919], 00:19:49.745 | 99.00th=[67634], 99.50th=[77071], 99.90th=[86508], 99.95th=[86508], 00:19:49.745 | 99.99th=[86508] 00:19:49.745 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:19:49.745 slat (usec): min=2, max=26783, avg=90.27, stdev=671.98 00:19:49.745 clat (usec): min=190, max=115310, avg=17632.55, stdev=20102.93 00:19:49.745 lat (usec): min=201, max=115317, avg=17722.82, stdev=20171.32 00:19:49.745 clat percentiles (usec): 00:19:49.745 | 1.00th=[ 725], 5.00th=[ 775], 10.00th=[ 1549], 20.00th=[ 3523], 00:19:49.745 | 30.00th=[ 6652], 40.00th=[ 8094], 50.00th=[ 9634], 60.00th=[ 11994], 00:19:49.745 | 70.00th=[ 22938], 80.00th=[ 29230], 90.00th=[ 36439], 95.00th=[ 53740], 00:19:49.745 | 99.00th=[105382], 99.50th=[109577], 99.90th=[114820], 99.95th=[114820], 00:19:49.745 | 99.99th=[114820] 00:19:49.745 bw ( KiB/s): min=15816, max=16952, per=31.08%, avg=16384.00, stdev=803.27, samples=2 00:19:49.745 iops : min= 3954, max= 4238, avg=4096.00, stdev=200.82, samples=2 00:19:49.745 lat (usec) : 250=0.01%, 500=0.21%, 750=1.11%, 1000=3.57% 00:19:49.745 lat (msec) : 2=3.05%, 4=6.09%, 10=22.46%, 20=37.65%, 50=21.65% 00:19:49.746 lat (msec) : 100=3.47%, 250=0.73% 00:19:49.746 cpu : usr=2.79%, sys=4.98%, ctx=361, majf=0, minf=1 00:19:49.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:49.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:49.746 issued rwts: total=3487,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:49.746 job3: (groupid=0, jobs=1): err= 0: pid=305819: Wed May 15 08:31:36 2024 00:19:49.746 read: IOPS=2135, BW=8540KiB/s (8745kB/s)(8600KiB/1007msec) 00:19:49.746 slat (nsec): min=1353, max=22737k, avg=160423.53, stdev=1199199.27 00:19:49.746 clat (usec): min=3776, max=69973, avg=18719.91, stdev=12973.83 00:19:49.746 lat (usec): min=3784, max=81051, avg=18880.33, stdev=13095.16 00:19:49.746 clat percentiles (usec): 00:19:49.746 | 1.00th=[ 7242], 5.00th=[ 9110], 10.00th=[ 9110], 
20.00th=[ 9503], 00:19:49.746 | 30.00th=[11207], 40.00th=[12387], 50.00th=[12780], 60.00th=[15795], 00:19:49.746 | 70.00th=[17433], 80.00th=[23200], 90.00th=[39584], 95.00th=[48497], 00:19:49.746 | 99.00th=[64750], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:19:49.746 | 99.99th=[69731] 00:19:49.746 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:19:49.746 slat (usec): min=2, max=27924, avg=248.08, stdev=1313.95 00:19:49.746 clat (usec): min=1042, max=120874, avg=34063.29, stdev=30253.50 00:19:49.746 lat (usec): min=1050, max=120885, avg=34311.37, stdev=30467.57 00:19:49.746 clat percentiles (msec): 00:19:49.746 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:19:49.746 | 30.00th=[ 15], 40.00th=[ 19], 50.00th=[ 23], 60.00th=[ 27], 00:19:49.746 | 70.00th=[ 33], 80.00th=[ 65], 90.00th=[ 88], 95.00th=[ 94], 00:19:49.746 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:19:49.746 | 99.99th=[ 122] 00:19:49.746 bw ( KiB/s): min= 7992, max=12288, per=19.23%, avg=10140.00, stdev=3037.73, samples=2 00:19:49.746 iops : min= 1998, max= 3072, avg=2535.00, stdev=759.43, samples=2 00:19:49.746 lat (msec) : 2=0.04%, 4=0.47%, 10=21.83%, 20=37.62%, 50=25.41% 00:19:49.746 lat (msec) : 100=12.63%, 250=2.00% 00:19:49.746 cpu : usr=2.39%, sys=3.28%, ctx=250, majf=0, minf=1 00:19:49.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:49.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:49.746 issued rwts: total=2150,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:49.746 00:19:49.746 Run status group 0 (all jobs): 00:19:49.746 READ: bw=45.5MiB/s (47.7MB/s), 8451KiB/s-15.5MiB/s (8654kB/s-16.2MB/s), io=45.9MiB (48.1MB), run=1006-1010msec 00:19:49.746 WRITE: bw=51.5MiB/s (54.0MB/s), 9.90MiB/s-15.9MiB/s (10.4MB/s-16.7MB/s), io=52.0MiB (54.5MB), run=1006-1010msec 00:19:49.746 00:19:49.746 Disk stats (read/write): 00:19:49.746 nvme0n1: ios=2098/2231, merge=0/0, ticks=34663/69345, in_queue=104008, util=87.27% 00:19:49.746 nvme0n2: ios=3085/3455, merge=0/0, ticks=32277/47436, in_queue=79713, util=87.12% 00:19:49.746 nvme0n3: ios=3072/3847, merge=0/0, ticks=42597/59881, in_queue=102478, util=88.98% 00:19:49.746 nvme0n4: ios=1753/2048, merge=0/0, ticks=19214/40484, in_queue=59698, util=96.02% 00:19:49.746 08:31:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:49.746 [global] 00:19:49.746 thread=1 00:19:49.746 invalidate=1 00:19:49.746 rw=randwrite 00:19:49.746 time_based=1 00:19:49.746 runtime=1 00:19:49.746 ioengine=libaio 00:19:49.746 direct=1 00:19:49.746 bs=4096 00:19:49.746 iodepth=128 00:19:49.746 norandommap=0 00:19:49.746 numjobs=1 00:19:49.746 00:19:49.746 verify_dump=1 00:19:49.746 verify_backlog=512 00:19:49.746 verify_state_save=0 00:19:49.746 do_verify=1 00:19:49.746 verify=crc32c-intel 00:19:49.746 [job0] 00:19:49.746 filename=/dev/nvme0n1 00:19:49.746 [job1] 00:19:49.746 filename=/dev/nvme0n2 00:19:49.746 [job2] 00:19:49.746 filename=/dev/nvme0n3 00:19:49.746 [job3] 00:19:49.746 filename=/dev/nvme0n4 00:19:49.746 Could not set queue depth (nvme0n1) 00:19:49.746 Could not set queue depth (nvme0n2) 00:19:49.746 Could not set queue depth (nvme0n3) 00:19:49.746 Could not set queue depth (nvme0n4) 
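This fourth pass repeats the random-write mix at a deep queue; the wrapper flags map onto the generated job file shown just above:

    fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
        -> bs=4096, iodepth=128, rw=randwrite, runtime=1, plus the verify* options

The recurring "Could not set queue depth" lines indicate fio was unable to adjust the kernel-side block-queue setting for these namespaces; the jobs still submit at the requested iodepth, and all four complete with err= 0 below, so the warnings are benign here.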
00:19:50.004 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.004 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.004 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.004 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.004 fio-3.35 00:19:50.004 Starting 4 threads 00:19:51.400 00:19:51.400 job0: (groupid=0, jobs=1): err= 0: pid=306239: Wed May 15 08:31:38 2024 00:19:51.400 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:19:51.400 slat (nsec): min=1364, max=13330k, avg=191342.30, stdev=1154516.55 00:19:51.400 clat (usec): min=7918, max=70668, avg=24943.23, stdev=10192.42 00:19:51.400 lat (usec): min=7921, max=76129, avg=25134.58, stdev=10296.73 00:19:51.400 clat percentiles (usec): 00:19:51.400 | 1.00th=[10028], 5.00th=[13566], 10.00th=[14222], 20.00th=[15401], 00:19:51.400 | 30.00th=[17171], 40.00th=[18482], 50.00th=[22676], 60.00th=[27132], 00:19:51.400 | 70.00th=[31065], 80.00th=[34341], 90.00th=[38536], 95.00th=[42206], 00:19:51.400 | 99.00th=[51643], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:19:51.400 | 99.99th=[70779] 00:19:51.400 write: IOPS=2805, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1008msec); 0 zone resets 00:19:51.400 slat (usec): min=2, max=10964, avg=175.24, stdev=1005.69 00:19:51.400 clat (usec): min=5713, max=45001, avg=22384.74, stdev=6963.36 00:19:51.400 lat (usec): min=6160, max=45774, avg=22559.98, stdev=7047.43 00:19:51.400 clat percentiles (usec): 00:19:51.400 | 1.00th=[10159], 5.00th=[14091], 10.00th=[14484], 20.00th=[15139], 00:19:51.400 | 30.00th=[17171], 40.00th=[20841], 50.00th=[21890], 60.00th=[23200], 00:19:51.400 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30278], 95.00th=[35390], 00:19:51.400 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:19:51.400 | 99.99th=[44827] 00:19:51.400 bw ( KiB/s): min=10472, max=11128, per=16.11%, avg=10800.00, stdev=463.86, samples=2 00:19:51.400 iops : min= 2618, max= 2782, avg=2700.00, stdev=115.97, samples=2 00:19:51.400 lat (msec) : 10=0.95%, 20=41.05%, 50=57.33%, 100=0.67% 00:19:51.400 cpu : usr=1.89%, sys=3.08%, ctx=217, majf=0, minf=1 00:19:51.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:51.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.400 issued rwts: total=2560,2828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.400 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.400 job1: (groupid=0, jobs=1): err= 0: pid=306253: Wed May 15 08:31:38 2024 00:19:51.400 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:19:51.400 slat (nsec): min=1069, max=14959k, avg=98594.54, stdev=649013.25 00:19:51.400 clat (usec): min=5352, max=50921, avg=12645.09, stdev=5587.01 00:19:51.400 lat (usec): min=5354, max=50968, avg=12743.69, stdev=5644.51 00:19:51.400 clat percentiles (usec): 00:19:51.400 | 1.00th=[ 5800], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[10552], 00:19:51.400 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11600], 00:19:51.400 | 70.00th=[11994], 80.00th=[12911], 90.00th=[15401], 95.00th=[25297], 00:19:51.401 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43779], 99.95th=[48497], 00:19:51.401 | 99.99th=[51119] 
00:19:51.401 write: IOPS=5289, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1009msec); 0 zone resets 00:19:51.401 slat (nsec): min=1757, max=10147k, avg=88126.98, stdev=532610.59 00:19:51.401 clat (usec): min=792, max=44506, avg=11740.84, stdev=3713.03 00:19:51.401 lat (usec): min=5578, max=44519, avg=11828.96, stdev=3746.60 00:19:51.401 clat percentiles (usec): 00:19:51.401 | 1.00th=[ 5800], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10683], 00:19:51.401 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11076], 00:19:51.401 | 70.00th=[11207], 80.00th=[11469], 90.00th=[12649], 95.00th=[19792], 00:19:51.401 | 99.00th=[31851], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:19:51.401 | 99.99th=[44303] 00:19:51.401 bw ( KiB/s): min=18264, max=23408, per=31.08%, avg=20836.00, stdev=3637.36, samples=2 00:19:51.401 iops : min= 4566, max= 5852, avg=5209.00, stdev=909.34, samples=2 00:19:51.401 lat (usec) : 1000=0.01% 00:19:51.401 lat (msec) : 10=12.19%, 20=82.34%, 50=5.45%, 100=0.01% 00:19:51.401 cpu : usr=4.46%, sys=6.05%, ctx=405, majf=0, minf=1 00:19:51.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:51.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.401 issued rwts: total=5120,5337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.401 job2: (groupid=0, jobs=1): err= 0: pid=306271: Wed May 15 08:31:38 2024 00:19:51.401 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:19:51.401 slat (nsec): min=1579, max=11194k, avg=114871.66, stdev=654416.66 00:19:51.401 clat (usec): min=6733, max=28061, avg=14295.98, stdev=3734.20 00:19:51.401 lat (usec): min=6737, max=28088, avg=14410.85, stdev=3782.37 00:19:51.401 clat percentiles (usec): 00:19:51.401 | 1.00th=[ 8094], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11338], 00:19:51.401 | 30.00th=[11863], 40.00th=[12256], 50.00th=[14091], 60.00th=[14877], 00:19:51.401 | 70.00th=[16057], 80.00th=[16909], 90.00th=[19792], 95.00th=[22152], 00:19:51.401 | 99.00th=[24511], 99.50th=[25297], 99.90th=[27657], 99.95th=[27657], 00:19:51.401 | 99.99th=[28181] 00:19:51.401 write: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1003msec); 0 zone resets 00:19:51.401 slat (usec): min=2, max=8868, avg=120.60, stdev=640.21 00:19:51.401 clat (usec): min=403, max=41973, avg=16340.84, stdev=6252.29 00:19:51.401 lat (usec): min=3030, max=41983, avg=16461.44, stdev=6299.21 00:19:51.401 clat percentiles (usec): 00:19:51.401 | 1.00th=[ 4752], 5.00th=[10028], 10.00th=[11338], 20.00th=[12125], 00:19:51.401 | 30.00th=[12387], 40.00th=[13173], 50.00th=[13960], 60.00th=[15139], 00:19:51.401 | 70.00th=[18744], 80.00th=[21365], 90.00th=[24249], 95.00th=[30016], 00:19:51.401 | 99.00th=[36439], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:19:51.401 | 99.99th=[42206] 00:19:51.401 bw ( KiB/s): min=13688, max=19080, per=24.44%, avg=16384.00, stdev=3812.72, samples=2 00:19:51.401 iops : min= 3422, max= 4770, avg=4096.00, stdev=953.18, samples=2 00:19:51.401 lat (usec) : 500=0.01% 00:19:51.401 lat (msec) : 4=0.39%, 10=5.34%, 20=79.17%, 50=15.09% 00:19:51.401 cpu : usr=3.79%, sys=5.39%, ctx=437, majf=0, minf=1 00:19:51.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:51.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:19:51.401 issued rwts: total=4096,4157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.401 job3: (groupid=0, jobs=1): err= 0: pid=306276: Wed May 15 08:31:38 2024 00:19:51.401 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:19:51.401 slat (nsec): min=1468, max=11537k, avg=117623.34, stdev=714731.22 00:19:51.401 clat (usec): min=7199, max=33182, avg=14967.74, stdev=3930.85 00:19:51.401 lat (usec): min=7476, max=33205, avg=15085.36, stdev=3999.26 00:19:51.401 clat percentiles (usec): 00:19:51.401 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[11731], 00:19:51.401 | 30.00th=[11994], 40.00th=[12911], 50.00th=[14222], 60.00th=[15533], 00:19:51.401 | 70.00th=[16712], 80.00th=[17957], 90.00th=[20055], 95.00th=[21890], 00:19:51.401 | 99.00th=[27395], 99.50th=[28705], 99.90th=[32900], 99.95th=[32900], 00:19:51.401 | 99.99th=[33162] 00:19:51.401 write: IOPS=4550, BW=17.8MiB/s (18.6MB/s)(17.9MiB/1009msec); 0 zone resets 00:19:51.401 slat (usec): min=2, max=10126, avg=108.00, stdev=637.31 00:19:51.401 clat (usec): min=810, max=38301, avg=14450.87, stdev=5556.28 00:19:51.401 lat (usec): min=6384, max=38313, avg=14558.87, stdev=5607.24 00:19:51.401 clat percentiles (usec): 00:19:51.401 | 1.00th=[ 7504], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[11600], 00:19:51.401 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:19:51.401 | 70.00th=[14484], 80.00th=[16581], 90.00th=[21627], 95.00th=[28705], 00:19:51.401 | 99.00th=[36439], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:19:51.401 | 99.99th=[38536] 00:19:51.401 bw ( KiB/s): min=15224, max=20480, per=26.63%, avg=17852.00, stdev=3716.55, samples=2 00:19:51.401 iops : min= 3806, max= 5120, avg=4463.00, stdev=929.14, samples=2 00:19:51.401 lat (usec) : 1000=0.01% 00:19:51.401 lat (msec) : 10=5.58%, 20=83.14%, 50=11.27% 00:19:51.401 cpu : usr=4.27%, sys=5.36%, ctx=425, majf=0, minf=1 00:19:51.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:51.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.401 issued rwts: total=4096,4591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.401 00:19:51.401 Run status group 0 (all jobs): 00:19:51.401 READ: bw=61.4MiB/s (64.4MB/s), 9.92MiB/s-19.8MiB/s (10.4MB/s-20.8MB/s), io=62.0MiB (65.0MB), run=1003-1009msec 00:19:51.401 WRITE: bw=65.5MiB/s (68.7MB/s), 11.0MiB/s-20.7MiB/s (11.5MB/s-21.7MB/s), io=66.1MiB (69.3MB), run=1003-1009msec 00:19:51.401 00:19:51.401 Disk stats (read/write): 00:19:51.401 nvme0n1: ios=2199/2560, merge=0/0, ticks=19587/18553, in_queue=38140, util=95.89% 00:19:51.401 nvme0n2: ios=4559/4608, merge=0/0, ticks=22979/19033, in_queue=42012, util=96.45% 00:19:51.401 nvme0n3: ios=3166/3584, merge=0/0, ticks=24278/28404, in_queue=52682, util=100.00% 00:19:51.401 nvme0n4: ios=3584/3952, merge=0/0, ticks=26076/25972, in_queue=52048, util=89.75% 00:19:51.401 08:31:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:51.401 08:31:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=306374 00:19:51.401 08:31:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:51.401 08:31:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 
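The fio-wrapper invocation just above (-d 1 -t read -r 10, i.e. a 10-second iodepth=1 read job) is the hotplug test: fio is backgrounded (fio_pid=306374), the script sleeps, and the backing bdevs are then deleted out from under the live subsystem while I/O is in flight. In outline (rpc.py path abbreviated; this mirrors the target/fio.sh@63-66 calls traced below):

    rpc.py bdev_raid_delete concat0
    rpc.py bdev_raid_delete raid0
    for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        rpc.py bdev_malloc_delete "$b"
    done

Each deleted namespace surfaces on the initiator as err=121 (Remote I/O error) in the fio output that follows -- the intended outcome, confirmed by the "nvmf hotplug test: fio failed as expected" message at the end of the run.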
00:19:51.401 [global] 00:19:51.401 thread=1 00:19:51.401 invalidate=1 00:19:51.401 rw=read 00:19:51.401 time_based=1 00:19:51.401 runtime=10 00:19:51.401 ioengine=libaio 00:19:51.401 direct=1 00:19:51.401 bs=4096 00:19:51.401 iodepth=1 00:19:51.401 norandommap=1 00:19:51.401 numjobs=1 00:19:51.401 00:19:51.401 [job0] 00:19:51.401 filename=/dev/nvme0n1 00:19:51.401 [job1] 00:19:51.401 filename=/dev/nvme0n2 00:19:51.401 [job2] 00:19:51.401 filename=/dev/nvme0n3 00:19:51.401 [job3] 00:19:51.401 filename=/dev/nvme0n4 00:19:51.401 Could not set queue depth (nvme0n1) 00:19:51.401 Could not set queue depth (nvme0n2) 00:19:51.401 Could not set queue depth (nvme0n3) 00:19:51.401 Could not set queue depth (nvme0n4) 00:19:51.663 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.663 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.663 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.663 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.663 fio-3.35 00:19:51.663 Starting 4 threads 00:19:54.200 08:31:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:54.459 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=38981632, buflen=4096 00:19:54.459 fio: pid=306657, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:54.459 08:31:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:54.719 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=34492416, buflen=4096 00:19:54.719 fio: pid=306656, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:54.719 08:31:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.719 08:31:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:54.719 08:31:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.719 08:31:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:54.719 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2547712, buflen=4096 00:19:54.719 fio: pid=306654, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:54.979 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=53440512, buflen=4096 00:19:54.979 fio: pid=306655, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:54.979 08:31:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.979 08:31:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:54.979 00:19:54.979 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=306654: Wed May 15 08:31:41 2024 00:19:54.979 read: IOPS=199, BW=798KiB/s (817kB/s)(2488KiB/3117msec) 00:19:54.979 slat (usec): min=5, max=12715, 
avg=49.38, stdev=717.74 00:19:54.979 clat (usec): min=204, max=42007, avg=4926.58, stdev=12980.98 00:19:54.979 lat (usec): min=215, max=54011, avg=4955.69, stdev=13051.41 00:19:54.979 clat percentiles (usec): 00:19:54.979 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:19:54.979 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:19:54.979 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[41157], 95.00th=[41157], 00:19:54.979 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:19:54.979 | 99.99th=[42206] 00:19:54.979 bw ( KiB/s): min= 96, max= 2456, per=2.05%, avg=791.67, stdev=927.93, samples=6 00:19:54.979 iops : min= 24, max= 614, avg=197.83, stdev=231.95, samples=6 00:19:54.979 lat (usec) : 250=11.08%, 500=77.21%, 750=0.16% 00:19:54.979 lat (msec) : 50=11.40% 00:19:54.979 cpu : usr=0.10%, sys=0.19%, ctx=625, majf=0, minf=1 00:19:54.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.979 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.979 issued rwts: total=623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.979 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=306655: Wed May 15 08:31:41 2024 00:19:54.979 read: IOPS=3986, BW=15.6MiB/s (16.3MB/s)(51.0MiB/3273msec) 00:19:54.979 slat (usec): min=6, max=33364, avg=14.86, stdev=365.09 00:19:54.979 clat (usec): min=157, max=3867, avg=232.41, stdev=75.13 00:19:54.979 lat (usec): min=164, max=33760, avg=247.27, stdev=375.86 00:19:54.979 clat percentiles (usec): 00:19:54.979 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 198], 00:19:54.979 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 233], 00:19:54.979 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 289], 00:19:54.979 | 99.00th=[ 474], 99.50th=[ 498], 99.90th=[ 1287], 99.95th=[ 1434], 00:19:54.979 | 99.99th=[ 3064] 00:19:54.979 bw ( KiB/s): min=14080, max=17872, per=41.96%, avg=16207.17, stdev=1446.14, samples=6 00:19:54.979 iops : min= 3520, max= 4468, avg=4051.67, stdev=361.55, samples=6 00:19:54.979 lat (usec) : 250=81.48%, 500=18.08%, 750=0.30% 00:19:54.979 lat (msec) : 2=0.11%, 4=0.02% 00:19:54.979 cpu : usr=1.89%, sys=6.63%, ctx=13056, majf=0, minf=1 00:19:54.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.979 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.979 issued rwts: total=13048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.979 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=306656: Wed May 15 08:31:41 2024 00:19:54.979 read: IOPS=2907, BW=11.4MiB/s (11.9MB/s)(32.9MiB/2897msec) 00:19:54.979 slat (usec): min=6, max=11661, avg=12.19, stdev=150.22 00:19:54.979 clat (usec): min=170, max=45003, avg=327.11, stdev=1951.20 00:19:54.979 lat (usec): min=178, max=45012, avg=339.30, stdev=1957.15 00:19:54.979 clat percentiles (usec): 00:19:54.979 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 212], 00:19:54.979 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:19:54.979 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 269], 
00:19:54.979 | 99.00th=[ 351], 99.50th=[ 457], 99.90th=[41157], 99.95th=[41157], 00:19:54.979 | 99.99th=[44827] 00:19:54.979 bw ( KiB/s): min= 1840, max=17072, per=28.40%, avg=10972.80, stdev=6819.05, samples=5 00:19:54.979 iops : min= 460, max= 4268, avg=2743.20, stdev=1704.76, samples=5 00:19:54.979 lat (usec) : 250=81.63%, 500=17.95%, 750=0.04%, 1000=0.01% 00:19:54.979 lat (msec) : 2=0.11%, 4=0.01%, 20=0.01%, 50=0.23% 00:19:54.979 cpu : usr=1.45%, sys=5.04%, ctx=8424, majf=0, minf=1 00:19:54.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.979 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.979 issued rwts: total=8422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.979 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=306657: Wed May 15 08:31:41 2024 00:19:54.979 read: IOPS=3551, BW=13.9MiB/s (14.5MB/s)(37.2MiB/2680msec) 00:19:54.979 slat (usec): min=7, max=103, avg= 8.82, stdev= 2.86 00:19:54.979 clat (usec): min=180, max=42210, avg=269.71, stdev=731.83 00:19:54.979 lat (usec): min=193, max=42218, avg=278.53, stdev=731.82 00:19:54.979 clat percentiles (usec): 00:19:54.979 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:19:54.979 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:19:54.979 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 310], 00:19:54.979 | 99.00th=[ 502], 99.50th=[ 515], 99.90th=[ 644], 99.95th=[ 1418], 00:19:54.979 | 99.99th=[42206] 00:19:54.979 bw ( KiB/s): min=13184, max=15504, per=37.58%, avg=14516.80, stdev=879.63, samples=5 00:19:54.979 iops : min= 3296, max= 3876, avg=3629.20, stdev=219.91, samples=5 00:19:54.979 lat (usec) : 250=57.68%, 500=41.28%, 750=0.95%, 1000=0.02% 00:19:54.979 lat (msec) : 2=0.03%, 50=0.03% 00:19:54.979 cpu : usr=1.79%, sys=5.86%, ctx=9519, majf=0, minf=2 00:19:54.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.979 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.979 issued rwts: total=9518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.980 00:19:54.980 Run status group 0 (all jobs): 00:19:54.980 READ: bw=37.7MiB/s (39.6MB/s), 798KiB/s-15.6MiB/s (817kB/s-16.3MB/s), io=123MiB (129MB), run=2680-3273msec 00:19:54.980 00:19:54.980 Disk stats (read/write): 00:19:54.980 nvme0n1: ios=623/0, merge=0/0, ticks=3071/0, in_queue=3071, util=95.01% 00:19:54.980 nvme0n2: ios=12606/0, merge=0/0, ticks=3781/0, in_queue=3781, util=97.93% 00:19:54.980 nvme0n3: ios=8330/0, merge=0/0, ticks=2634/0, in_queue=2634, util=95.94% 00:19:54.980 nvme0n4: ios=9322/0, merge=0/0, ticks=2385/0, in_queue=2385, util=96.45% 00:19:55.248 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.248 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:55.248 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.248 08:31:42 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:55.509 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.509 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:55.771 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.771 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 306374 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:56.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:56.030 nvmf hotplug test: fio failed as expected 00:19:56.030 08:31:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:56.292 rmmod nvme_tcp 00:19:56.292 rmmod nvme_fabrics 00:19:56.292 rmmod nvme_keyring 00:19:56.292 08:31:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 303588 ']' 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 303588 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 303588 ']' 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 303588 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 303588 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 303588' 00:19:56.292 killing process with pid 303588 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 303588 00:19:56.292 [2024-05-15 08:31:43.272170] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:56.292 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 303588 00:19:56.553 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:56.553 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:56.553 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:56.553 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.553 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:56.553 08:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.553 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.553 08:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.095 08:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:59.095 00:19:59.095 real 0m26.148s 00:19:59.095 user 1m46.721s 00:19:59.095 sys 0m8.249s 00:19:59.095 08:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:59.095 08:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.095 ************************************ 00:19:59.095 END TEST nvmf_fio_target 00:19:59.095 ************************************ 00:19:59.095 08:31:45 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:59.095 08:31:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:59.095 08:31:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:59.095 08:31:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:59.095 
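
The err=121 (Remote I/O error) results in the fio summary above are the point of this suite: the hotplug pass deletes the malloc bdevs backing the subsystem's namespaces while fio is still running, so the verify jobs are expected to die mid-run (hence fio_status=4 and the "fio failed as expected" message). The ~41 ms completion-latency tail on the first job, with 11.4% of its I/Os landing in the 50 ms bucket, is consistent with I/O stalling as namespaces disappear. A minimal sketch of that flow, with the fio job file name and PID handling illustrative rather than copied from the suite:

# hedged sketch of the hotplug pattern exercised above
fio ./nvmf_hotplug.fio &                  # illustrative job file; suite uses its own fio config
fio_pid=$!
for b in Malloc3 Malloc4 Malloc5 Malloc6; do
  scripts/rpc.py bdev_malloc_delete "$b"  # hot-remove the backing bdevs under I/O
done
fio_status=0
wait "$fio_pid" || fio_status=4           # nonzero exit is the expected outcome here
[ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'
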
************************************ 00:19:59.095 START TEST nvmf_bdevio 00:19:59.095 ************************************ 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:59.095 * Looking for test storage... 00:19:59.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.095 08:31:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:59.096 08:31:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:03.292 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:03.292 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:03.292 Found net devices under 0000:86:00.0: cvl_0_0 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:03.292 
Found net devices under 0000:86:00.1: cvl_0_1 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:03.292 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:03.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:20:03.553 00:20:03.553 --- 10.0.0.2 ping statistics --- 00:20:03.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.553 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:03.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:20:03.553 00:20:03.553 --- 10.0.0.1 ping statistics --- 00:20:03.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.553 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=310658 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 310658 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 310658 ']' 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.553 08:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:03.554 08:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.554 [2024-05-15 08:31:50.438910] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:20:03.554 [2024-05-15 08:31:50.438952] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.554 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.554 [2024-05-15 08:31:50.496271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.554 [2024-05-15 08:31:50.576024] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.554 [2024-05-15 08:31:50.576057] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
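
The ping exchange above verifies the plumbing that common.sh set up: one port of the dual-port E810 NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target (10.0.0.2), while its peer port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic crosses the NIC ports rather than kernel loopback; the target application is then launched inside the namespace. A condensed recap of those commands, as they appear in the trace (workspace paths abbreviated):

# target/initiator split across a network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
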
00:20:03.554 [2024-05-15 08:31:50.576064] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.554 [2024-05-15 08:31:50.576070] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.554 [2024-05-15 08:31:50.576075] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.554 [2024-05-15 08:31:50.576408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:03.554 [2024-05-15 08:31:50.576442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:03.813 [2024-05-15 08:31:50.576977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:03.813 [2024-05-15 08:31:50.576977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.383 [2024-05-15 08:31:51.274065] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.383 Malloc0 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:20:04.383 [2024-05-15 08:31:51.325493] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:04.383 [2024-05-15 08:31:51.325720] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.383 { 00:20:04.383 "params": { 00:20:04.383 "name": "Nvme$subsystem", 00:20:04.383 "trtype": "$TEST_TRANSPORT", 00:20:04.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.383 "adrfam": "ipv4", 00:20:04.383 "trsvcid": "$NVMF_PORT", 00:20:04.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.383 "hdgst": ${hdgst:-false}, 00:20:04.383 "ddgst": ${ddgst:-false} 00:20:04.383 }, 00:20:04.383 "method": "bdev_nvme_attach_controller" 00:20:04.383 } 00:20:04.383 EOF 00:20:04.383 )") 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:04.383 08:31:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:04.383 "params": { 00:20:04.383 "name": "Nvme1", 00:20:04.383 "trtype": "tcp", 00:20:04.383 "traddr": "10.0.0.2", 00:20:04.383 "adrfam": "ipv4", 00:20:04.383 "trsvcid": "4420", 00:20:04.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.383 "hdgst": false, 00:20:04.383 "ddgst": false 00:20:04.384 }, 00:20:04.384 "method": "bdev_nvme_attach_controller" 00:20:04.384 }' 00:20:04.384 [2024-05-15 08:31:51.375185] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
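
bdevio takes its bdev configuration as JSON streamed over a file descriptor (--json /dev/fd/62): gen_nvmf_target_json fills the template shown above into a single bdev_nvme_attach_controller entry that attaches Nvme1 to the subsystem just created at 10.0.0.2:4420. The decode_rpc_listen_address deprecation warning, for reference, concerns the listener RPC rather than this config; a sketch of the field rename, inferred from the warning text itself:

# nvmf_subsystem_add_listener params, per the deprecation notice above:
#   deprecated: "listen_address": { "transport": "tcp", ... }   # spelling slated for removal in v24.09
#   preferred:  "listen_address": { "trtype":    "tcp", ... }
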
00:20:04.384 [2024-05-15 08:31:51.375233] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310902 ] 00:20:04.384 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.643 [2024-05-15 08:31:51.430024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:04.643 [2024-05-15 08:31:51.504643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.643 [2024-05-15 08:31:51.504664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.643 [2024-05-15 08:31:51.504665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.900 I/O targets: 00:20:04.900 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:04.900 00:20:04.900 00:20:04.900 CUnit - A unit testing framework for C - Version 2.1-3 00:20:04.900 http://cunit.sourceforge.net/ 00:20:04.900 00:20:04.900 00:20:04.900 Suite: bdevio tests on: Nvme1n1 00:20:04.900 Test: blockdev write read block ...passed 00:20:05.158 Test: blockdev write zeroes read block ...passed 00:20:05.158 Test: blockdev write zeroes read no split ...passed 00:20:05.158 Test: blockdev write zeroes read split ...passed 00:20:05.158 Test: blockdev write zeroes read split partial ...passed 00:20:05.158 Test: blockdev reset ...[2024-05-15 08:31:52.055876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.158 [2024-05-15 08:31:52.055937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209a7f0 (9): Bad file descriptor 00:20:05.158 [2024-05-15 08:31:52.154257] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
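
The reset test that just passed tears the TCP connection down and reconnects it; the "Bad file descriptor" flush error during teardown is the expected transient, and _bdev_nvme_reset_ctrlr_complete confirms recovery. The COMPARE/WRITE notices that follow are likewise expected failures: bdevio's fused compare-and-write cases deliberately miscompare. Decoding the (SCT/SC) status pairs printed below, per the NVMe base spec:

#   02/85 -> SCT 0x2 (media/data integrity), SC 0x85: Compare Failure (the intended miscompare)
#   00/09 -> SCT 0x0 (generic), SC 0x09: Command Aborted due to Failed Fused Command (the paired write)
#   00/01 -> SCT 0x0 (generic), SC 0x01: Invalid Opcode (the passthru negative-path cases further down)
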
00:20:05.158 passed 00:20:05.158 Test: blockdev write read 8 blocks ...passed 00:20:05.158 Test: blockdev write read size > 128k ...passed 00:20:05.158 Test: blockdev write read invalid size ...passed 00:20:05.417 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:05.417 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:05.417 Test: blockdev write read max offset ...passed 00:20:05.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:05.417 Test: blockdev writev readv 8 blocks ...passed 00:20:05.417 Test: blockdev writev readv 30 x 1block ...passed 00:20:05.417 Test: blockdev writev readv block ...passed 00:20:05.417 Test: blockdev writev readv size > 128k ...passed 00:20:05.417 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:05.417 Test: blockdev comparev and writev ...[2024-05-15 08:31:52.406006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.417 [2024-05-15 08:31:52.406043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.417 [2024-05-15 08:31:52.406057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.417 [2024-05-15 08:31:52.406065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:05.417 [2024-05-15 08:31:52.406316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.417 [2024-05-15 08:31:52.406329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:05.417 [2024-05-15 08:31:52.406340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.417 [2024-05-15 08:31:52.406348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:05.417 [2024-05-15 08:31:52.406597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.417 [2024-05-15 08:31:52.406608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:05.417 [2024-05-15 08:31:52.406619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.417 [2024-05-15 08:31:52.406629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:05.417 [2024-05-15 08:31:52.406862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.417 [2024-05-15 08:31:52.406871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:05.417 [2024-05-15 08:31:52.406883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.417 [2024-05-15 08:31:52.406890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:05.675 passed 00:20:05.675 Test: blockdev nvme passthru rw ...passed 00:20:05.675 Test: blockdev nvme passthru vendor specific ...[2024-05-15 08:31:52.490487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.675 [2024-05-15 08:31:52.490502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:05.675 [2024-05-15 08:31:52.490614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.675 [2024-05-15 08:31:52.490623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:05.676 [2024-05-15 08:31:52.490722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.676 [2024-05-15 08:31:52.490731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:05.676 [2024-05-15 08:31:52.490838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.676 [2024-05-15 08:31:52.490847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:05.676 passed 00:20:05.676 Test: blockdev nvme admin passthru ...passed 00:20:05.676 Test: blockdev copy ...passed 00:20:05.676 00:20:05.676 Run Summary: Type Total Ran Passed Failed Inactive 00:20:05.676 suites 1 1 n/a 0 0 00:20:05.676 tests 23 23 23 0 0 00:20:05.676 asserts 152 152 152 0 n/a 00:20:05.676 00:20:05.676 Elapsed time = 1.369 seconds 00:20:05.933 08:31:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.933 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.933 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:05.933 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.933 08:31:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:05.933 08:31:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:05.933 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.933 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:05.933 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.933 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.934 rmmod nvme_tcp 00:20:05.934 rmmod nvme_fabrics 00:20:05.934 rmmod nvme_keyring 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 310658 ']' 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 310658 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
310658 ']' 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 310658 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 310658 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 310658' 00:20:05.934 killing process with pid 310658 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 310658 00:20:05.934 [2024-05-15 08:31:52.849575] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:05.934 08:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 310658 00:20:06.192 08:31:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:06.192 08:31:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:06.192 08:31:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:06.192 08:31:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.192 08:31:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.192 08:31:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.192 08:31:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.192 08:31:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:08.896 00:20:08.896 real 0m9.536s 00:20:08.896 user 0m13.988s 00:20:08.896 sys 0m3.973s 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:08.896 ************************************ 00:20:08.896 END TEST nvmf_bdevio 00:20:08.896 ************************************ 00:20:08.896 08:31:55 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:20:08.896 08:31:55 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:08.896 08:31:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:20:08.896 08:31:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:08.896 08:31:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:08.896 ************************************ 00:20:08.896 START TEST nvmf_bdevio_no_huge 00:20:08.896 ************************************ 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:08.896 * Looking for test storage... 
00:20:08.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:08.896 08:31:55 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:08.896 08:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:14.376 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:14.376 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:14.376 Found net devices under 0000:86:00.0: cvl_0_0 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.376 08:32:00 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:14.376 Found net devices under 0000:86:00.1: cvl_0_1 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.376 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:20:14.377 00:20:14.377 --- 10.0.0.2 ping statistics --- 00:20:14.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.377 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:20:14.377 00:20:14.377 --- 10.0.0.1 ping statistics --- 00:20:14.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.377 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=314662 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 314662 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 314662 ']' 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
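The nvmf_tcp_init trace above (nvmf/common.sh@229-268) boils down to a short sequence: flush any stale addresses, move the first E810 port into a private network namespace, address both ends of the link out of 10.0.0.0/24, open TCP port 4420, and ping in both directions before the target starts. A minimal standalone sketch of that plumbing, assuming the cvl_0_0/cvl_0_1 interface names from this run:

  # run as root; cvl_0_0 and cvl_0_1 are two ports of the same E810 NIC
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> root ns

Putting the target port behind a namespace is what lets one machine act as both NVMe/TCP target and initiator over real NIC ports rather than loopback; the successful ping pair above confirms the path before nvmf_tgt is launched inside the namespace.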
00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:14.377 08:32:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.377 [2024-05-15 08:32:00.743777] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:20:14.377 [2024-05-15 08:32:00.743822] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:14.377 [2024-05-15 08:32:00.806961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.377 [2024-05-15 08:32:00.892394] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.377 [2024-05-15 08:32:00.892425] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.377 [2024-05-15 08:32:00.892432] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.377 [2024-05-15 08:32:00.892438] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.377 [2024-05-15 08:32:00.892443] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.377 [2024-05-15 08:32:00.892555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:14.377 [2024-05-15 08:32:00.892661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:14.377 [2024-05-15 08:32:00.892747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.377 [2024-05-15 08:32:00.892748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.637 [2024-05-15 08:32:01.584822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.637 Malloc0 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.637 [2024-05-15 08:32:01.628908] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:14.637 [2024-05-15 08:32:01.629115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:14.637 { 00:20:14.637 "params": { 00:20:14.637 "name": "Nvme$subsystem", 00:20:14.637 "trtype": "$TEST_TRANSPORT", 00:20:14.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.637 "adrfam": "ipv4", 00:20:14.637 "trsvcid": "$NVMF_PORT", 00:20:14.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.637 "hdgst": ${hdgst:-false}, 00:20:14.637 "ddgst": ${ddgst:-false} 00:20:14.637 }, 00:20:14.637 "method": "bdev_nvme_attach_controller" 00:20:14.637 } 00:20:14.637 EOF 00:20:14.637 )") 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
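With the namespace up, bdevio.sh starts nvmf_tgt inside it (the ip netns exec ... --no-huge -s 1024 invocation traced at nvmf/common.sh@480) and the rpc_cmd calls at target/bdevio.sh@18-22 perform a standard bring-up of a TCP target backed by a RAM disk. Collapsed into a plain rpc.py session against the default socket, that sequence is:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u 8192 sets the I/O unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM disk, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The gen_nvmf_target_json heredoc above then renders those same coordinates (traddr 10.0.0.2, trsvcid 4420, the cnode1/host1 NQNs) into the bdev_nvme_attach_controller JSON printed just below, which bdevio consumes via --json /dev/fd/62.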
00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:14.637 08:32:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:14.637 "params": { 00:20:14.637 "name": "Nvme1", 00:20:14.637 "trtype": "tcp", 00:20:14.637 "traddr": "10.0.0.2", 00:20:14.637 "adrfam": "ipv4", 00:20:14.637 "trsvcid": "4420", 00:20:14.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.637 "hdgst": false, 00:20:14.637 "ddgst": false 00:20:14.637 }, 00:20:14.637 "method": "bdev_nvme_attach_controller" 00:20:14.637 }' 00:20:14.897 [2024-05-15 08:32:01.677979] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:20:14.897 [2024-05-15 08:32:01.678024] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid314737 ] 00:20:14.897 [2024-05-15 08:32:01.737876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.897 [2024-05-15 08:32:01.823964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.897 [2024-05-15 08:32:01.824059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.897 [2024-05-15 08:32:01.824059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.156 I/O targets: 00:20:15.156 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:15.156 00:20:15.156 00:20:15.156 CUnit - A unit testing framework for C - Version 2.1-3 00:20:15.156 http://cunit.sourceforge.net/ 00:20:15.156 00:20:15.156 00:20:15.156 Suite: bdevio tests on: Nvme1n1 00:20:15.156 Test: blockdev write read block ...passed 00:20:15.156 Test: blockdev write zeroes read block ...passed 00:20:15.156 Test: blockdev write zeroes read no split ...passed 00:20:15.156 Test: blockdev write zeroes read split ...passed 00:20:15.414 Test: blockdev write zeroes read split partial ...passed 00:20:15.414 Test: blockdev reset ...[2024-05-15 08:32:02.205477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:15.414 [2024-05-15 08:32:02.205539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7be5a0 (9): Bad file descriptor 00:20:15.414 [2024-05-15 08:32:02.307453] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:15.414 passed 00:20:15.414 Test: blockdev write read 8 blocks ...passed 00:20:15.414 Test: blockdev write read size > 128k ...passed 00:20:15.414 Test: blockdev write read invalid size ...passed 00:20:15.414 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:15.414 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:15.414 Test: blockdev write read max offset ...passed 00:20:15.414 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:15.414 Test: blockdev writev readv 8 blocks ...passed 00:20:15.414 Test: blockdev writev readv 30 x 1block ...passed 00:20:15.675 Test: blockdev writev readv block ...passed 00:20:15.675 Test: blockdev writev readv size > 128k ...passed 00:20:15.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:15.675 Test: blockdev comparev and writev ...[2024-05-15 08:32:02.478672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.675 [2024-05-15 08:32:02.478701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.675 [2024-05-15 08:32:02.478715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.675 [2024-05-15 08:32:02.478723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:15.675 [2024-05-15 08:32:02.478978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.675 [2024-05-15 08:32:02.478990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:15.675 [2024-05-15 08:32:02.479002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.675 [2024-05-15 08:32:02.479010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:15.675 [2024-05-15 08:32:02.479278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.675 [2024-05-15 08:32:02.479289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:15.675 [2024-05-15 08:32:02.479300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.675 [2024-05-15 08:32:02.479308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:15.675 [2024-05-15 08:32:02.479556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.675 [2024-05-15 08:32:02.479565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:15.675 [2024-05-15 08:32:02.479577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.675 [2024-05-15 08:32:02.479584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:15.675 passed 00:20:15.675 Test: blockdev nvme passthru rw ...passed 00:20:15.675 Test: blockdev nvme passthru vendor specific ...[2024-05-15 08:32:02.563525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.675 [2024-05-15 08:32:02.563540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:15.675 [2024-05-15 08:32:02.563646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.675 [2024-05-15 08:32:02.563655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:15.675 [2024-05-15 08:32:02.563750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.675 [2024-05-15 08:32:02.563759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:15.675 [2024-05-15 08:32:02.563862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.675 [2024-05-15 08:32:02.563871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:15.675 passed 00:20:15.675 Test: blockdev nvme admin passthru ...passed 00:20:15.675 Test: blockdev copy ...passed 00:20:15.675 00:20:15.675 Run Summary: Type Total Ran Passed Failed Inactive 00:20:15.675 suites 1 1 n/a 0 0 00:20:15.675 tests 23 23 23 0 0 00:20:15.675 asserts 152 152 152 0 n/a 00:20:15.675 00:20:15.675 Elapsed time = 1.223 seconds 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.934 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.934 rmmod nvme_tcp 00:20:15.934 rmmod nvme_fabrics 00:20:15.934 rmmod nvme_keyring 00:20:16.193 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:16.193 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:16.193 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:16.193 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 314662 ']' 00:20:16.193 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 314662 00:20:16.193 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 314662 ']' 00:20:16.193 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 314662 00:20:16.193 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:20:16.193 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:16.193 08:32:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 314662 00:20:16.193 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:20:16.193 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:20:16.193 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 314662' 00:20:16.193 killing process with pid 314662 00:20:16.193 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 314662 00:20:16.193 [2024-05-15 08:32:03.019292] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:16.193 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 314662 00:20:16.453 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:16.453 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:16.453 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:16.453 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.453 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:16.453 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.453 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.453 08:32:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.988 08:32:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:18.988 00:20:18.988 real 0m10.212s 00:20:18.988 user 0m13.296s 00:20:18.988 sys 0m4.835s 00:20:18.988 08:32:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:18.988 08:32:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.988 ************************************ 00:20:18.988 END TEST nvmf_bdevio_no_huge 00:20:18.988 ************************************ 00:20:18.988 08:32:05 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:18.988 08:32:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:18.988 08:32:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:18.988 08:32:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.988 ************************************ 00:20:18.988 START TEST nvmf_tls 00:20:18.988 ************************************ 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:18.988 * Looking for test 
storage... 00:20:18.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.988 08:32:05 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:18.989 08:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:23.183 
08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:23.183 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:23.183 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:23.183 Found net devices under 0000:86:00.0: cvl_0_0 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:23.183 Found net devices under 0000:86:00.1: cvl_0_1 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.183 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:23.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:20:23.443 00:20:23.443 --- 10.0.0.2 ping statistics --- 00:20:23.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.443 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:20:23.443 00:20:23.443 --- 10.0.0.1 ping statistics --- 00:20:23.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.443 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=318440 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 318440 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 318440 ']' 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:23.443 08:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.443 [2024-05-15 08:32:10.386270] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:20:23.443 [2024-05-15 08:32:10.386313] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.443 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.443 [2024-05-15 08:32:10.440566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.702 [2024-05-15 08:32:10.519627] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.702 [2024-05-15 08:32:10.519662] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:23.702 [2024-05-15 08:32:10.519670] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.702 [2024-05-15 08:32:10.519676] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.702 [2024-05-15 08:32:10.519680] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.702 [2024-05-15 08:32:10.519715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.268 08:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:24.268 08:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:24.268 08:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.268 08:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.268 08:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.268 08:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.268 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:24.268 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:24.632 true 00:20:24.632 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:24.632 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:24.632 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:24.632 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:24.632 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:24.890 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:24.890 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:24.890 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:24.890 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:24.890 08:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:25.148 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:25.148 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:25.406 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:25.406 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:25.406 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:25.406 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:25.406 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:25.406 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:25.406 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:25.666 08:32:12 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:25.666 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:25.925 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:25.925 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:25.925 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:25.925 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:25.925 08:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.uQVgHq5ybJ 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.9Tf2g3sEXs 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.uQVgHq5ybJ 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9Tf2g3sEXs 00:20:26.184 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:26.442 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:26.701 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.uQVgHq5ybJ 00:20:26.701 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uQVgHq5ybJ 00:20:26.701 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:26.701 [2024-05-15 08:32:13.672775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.701 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:26.960 08:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:27.219 [2024-05-15 08:32:14.009625] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:27.219 [2024-05-15 08:32:14.009671] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:27.219 [2024-05-15 08:32:14.009827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.219 08:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:27.219 malloc0 00:20:27.219 08:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:27.478 08:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uQVgHq5ybJ 00:20:27.478 [2024-05-15 08:32:14.490949] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:27.737 08:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.uQVgHq5ybJ 00:20:27.737 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.724 Initializing NVMe Controllers 00:20:37.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.724 Initialization complete. Launching workers. 
00:20:37.724 ======================================================== 00:20:37.724 Latency(us) 00:20:37.724 Device Information : IOPS MiB/s Average min max 00:20:37.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16672.94 65.13 3838.94 803.57 4623.31 00:20:37.724 ======================================================== 00:20:37.724 Total : 16672.94 65.13 3838.94 803.57 4623.31 00:20:37.724 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uQVgHq5ybJ 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uQVgHq5ybJ' 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=320789 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 320789 /var/tmp/bdevperf.sock 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 320789 ']' 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:37.724 08:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.724 [2024-05-15 08:32:24.630187] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:20:37.724 [2024-05-15 08:32:24.630233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320789 ] 00:20:37.724 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.724 [2024-05-15 08:32:24.679611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.984 [2024-05-15 08:32:24.759292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.984 08:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:37.984 08:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:37.984 08:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uQVgHq5ybJ 00:20:37.984 [2024-05-15 08:32:24.995987] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.984 [2024-05-15 08:32:24.996064] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:38.243 TLSTESTn1 00:20:38.243 08:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:38.243 Running I/O for 10 seconds... 00:20:48.225 00:20:48.225 Latency(us) 00:20:48.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.225 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:48.225 Verification LBA range: start 0x0 length 0x2000 00:20:48.225 TLSTESTn1 : 10.02 5071.79 19.81 0.00 0.00 25197.10 5014.93 64738.17 00:20:48.225 =================================================================================================================== 00:20:48.225 Total : 5071.79 19.81 0.00 0.00 25197.10 5014.93 64738.17 00:20:48.225 0 00:20:48.225 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:48.225 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 320789 00:20:48.225 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 320789 ']' 00:20:48.225 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 320789 00:20:48.225 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:48.225 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:48.225 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 320789 00:20:48.484 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:48.484 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:48.484 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 320789' 00:20:48.484 killing process with pid 320789 00:20:48.484 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 320789 00:20:48.484 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.484 00:20:48.484 Latency(us) 00:20:48.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.485 
=================================================================================================================== 00:20:48.485 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.485 [2024-05-15 08:32:35.272795] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 320789 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Tf2g3sEXs 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Tf2g3sEXs 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Tf2g3sEXs 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9Tf2g3sEXs' 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=322595 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 322595 /var/tmp/bdevperf.sock 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 322595 ']' 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:48.485 08:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.744 [2024-05-15 08:32:35.530676] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:20:48.744 [2024-05-15 08:32:35.530728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322595 ] 00:20:48.744 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.744 [2024-05-15 08:32:35.580769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.744 [2024-05-15 08:32:35.649998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.311 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:49.311 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:49.311 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Tf2g3sEXs 00:20:49.571 [2024-05-15 08:32:36.472176] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.571 [2024-05-15 08:32:36.472250] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:49.571 [2024-05-15 08:32:36.478068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:49.571 [2024-05-15 08:32:36.478572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46490 (107): Transport endpoint is not connected 00:20:49.571 [2024-05-15 08:32:36.479565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46490 (9): Bad file descriptor 00:20:49.571 [2024-05-15 08:32:36.480566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:49.571 [2024-05-15 08:32:36.480580] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:49.571 [2024-05-15 08:32:36.480588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
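(The JSON-RPC error dump for this failed attach follows below. However a given case is made to fail, the initiator side of run_bdevperf is the same two steps against the bdevperf RPC socket, both visible with full paths in the trace; a sketch with the paths shortened:)

# attach a bdev controller over NVMe/TCP, presenting the PSK for the TLS handshake
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$psk_path"
# then drive verify I/O through the attached bdev for the configured time
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests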
00:20:49.571 request: 00:20:49.571 { 00:20:49.571 "name": "TLSTEST", 00:20:49.571 "trtype": "tcp", 00:20:49.571 "traddr": "10.0.0.2", 00:20:49.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.571 "adrfam": "ipv4", 00:20:49.571 "trsvcid": "4420", 00:20:49.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.571 "psk": "/tmp/tmp.9Tf2g3sEXs", 00:20:49.571 "method": "bdev_nvme_attach_controller", 00:20:49.571 "req_id": 1 00:20:49.571 } 00:20:49.571 Got JSON-RPC error response 00:20:49.571 response: 00:20:49.571 { 00:20:49.571 "code": -32602, 00:20:49.571 "message": "Invalid parameters" 00:20:49.571 } 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 322595 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 322595 ']' 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 322595 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 322595 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 322595' 00:20:49.571 killing process with pid 322595 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 322595 00:20:49.571 Received shutdown signal, test time was about 10.000000 seconds 00:20:49.571 00:20:49.571 Latency(us) 00:20:49.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.571 =================================================================================================================== 00:20:49.571 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:49.571 [2024-05-15 08:32:36.546280] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:49.571 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 322595 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uQVgHq5ybJ 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uQVgHq5ybJ 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uQVgHq5ybJ 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uQVgHq5ybJ' 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=322754 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 322754 /var/tmp/bdevperf.sock 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 322754 ']' 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:49.832 08:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.832 [2024-05-15 08:32:36.795444] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:20:49.832 [2024-05-15 08:32:36.795491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322754 ] 00:20:49.832 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.832 [2024-05-15 08:32:36.846870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.092 [2024-05-15 08:32:36.921274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.661 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:50.661 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:50.661 08:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.uQVgHq5ybJ 00:20:50.920 [2024-05-15 08:32:37.755500] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.920 [2024-05-15 08:32:37.755570] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:50.920 [2024-05-15 08:32:37.761258] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:50.920 [2024-05-15 08:32:37.761280] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:50.920 [2024-05-15 08:32:37.761302] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:50.920 [2024-05-15 08:32:37.761674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d3490 (107): Transport endpoint is not connected 00:20:50.920 [2024-05-15 08:32:37.762668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d3490 (9): Bad file descriptor 00:20:50.920 [2024-05-15 08:32:37.763669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:50.920 [2024-05-15 08:32:37.763680] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:50.920 [2024-05-15 08:32:37.763689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
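(The error response and cleanup for this case follow below. Each negative case is asserted through autotest_common.sh's NOT wrapper, whose argument-type checks fill much of this trace; a minimal sketch of the inversion idiom, assuming a simplified helper, while the real one also validates its argument and clamps the exit status:)

NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command and capture its exit status
    (( es != 0 ))    # succeed only if the wrapped command failed
}
NOT run_bdevperf "$subnqn" "$hostnqn" "$psk_path"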
00:20:50.920 request: 00:20:50.920 { 00:20:50.920 "name": "TLSTEST", 00:20:50.920 "trtype": "tcp", 00:20:50.920 "traddr": "10.0.0.2", 00:20:50.920 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:50.920 "adrfam": "ipv4", 00:20:50.920 "trsvcid": "4420", 00:20:50.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.920 "psk": "/tmp/tmp.uQVgHq5ybJ", 00:20:50.920 "method": "bdev_nvme_attach_controller", 00:20:50.920 "req_id": 1 00:20:50.920 } 00:20:50.920 Got JSON-RPC error response 00:20:50.920 response: 00:20:50.920 { 00:20:50.920 "code": -32602, 00:20:50.920 "message": "Invalid parameters" 00:20:50.920 } 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 322754 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 322754 ']' 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 322754 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 322754 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 322754' 00:20:50.920 killing process with pid 322754 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 322754 00:20:50.920 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.920 00:20:50.920 Latency(us) 00:20:50.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.920 =================================================================================================================== 00:20:50.920 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:50.920 [2024-05-15 08:32:37.828572] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:50.920 08:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 322754 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uQVgHq5ybJ 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uQVgHq5ybJ 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uQVgHq5ybJ 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uQVgHq5ybJ' 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=322920 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 322920 /var/tmp/bdevperf.sock 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 322920 ']' 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:51.180 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.180 [2024-05-15 08:32:38.077685] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:20:51.180 [2024-05-15 08:32:38.077730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322920 ] 00:20:51.180 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.180 [2024-05-15 08:32:38.128191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.180 [2024-05-15 08:32:38.194715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.440 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:51.440 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:51.440 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uQVgHq5ybJ 00:20:51.440 [2024-05-15 08:32:38.435927] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.440 [2024-05-15 08:32:38.435993] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:51.440 [2024-05-15 08:32:38.442995] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:51.440 [2024-05-15 08:32:38.443015] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:51.440 [2024-05-15 08:32:38.443037] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:51.440 [2024-05-15 08:32:38.443295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd30490 (107): Transport endpoint is not connected 00:20:51.440 [2024-05-15 08:32:38.444288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd30490 (9): Bad file descriptor 00:20:51.440 [2024-05-15 08:32:38.445289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:51.440 [2024-05-15 08:32:38.445298] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:51.440 [2024-05-15 08:32:38.445307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
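(Here the subsystem NQN is wrong rather than the host NQN, and the target again fails the PSK lookup, printing the identity it searched for; the JSON-RPC error follows below. Going by the 'Could not find PSK for identity' errors printed above, the lookup key is a three-field string from the TLS handshake; a sketch of its shape, with the NVMe0R01 prefix taken verbatim from the log rather than decoded:)

# PSK identity as printed by tcp_sock_get_key / posix_sock_psk_find_session_server_cb
identity="NVMe0R01 $hostnqn $subnqn"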
00:20:51.440 request: 00:20:51.440 { 00:20:51.440 "name": "TLSTEST", 00:20:51.440 "trtype": "tcp", 00:20:51.440 "traddr": "10.0.0.2", 00:20:51.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.440 "adrfam": "ipv4", 00:20:51.440 "trsvcid": "4420", 00:20:51.440 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:51.440 "psk": "/tmp/tmp.uQVgHq5ybJ", 00:20:51.440 "method": "bdev_nvme_attach_controller", 00:20:51.440 "req_id": 1 00:20:51.440 } 00:20:51.440 Got JSON-RPC error response 00:20:51.440 response: 00:20:51.440 { 00:20:51.440 "code": -32602, 00:20:51.440 "message": "Invalid parameters" 00:20:51.440 } 00:20:51.699 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 322920 00:20:51.699 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 322920 ']' 00:20:51.699 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 322920 00:20:51.699 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:51.699 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:51.699 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 322920 00:20:51.699 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:51.699 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 322920' 00:20:51.700 killing process with pid 322920 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 322920 00:20:51.700 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.700 00:20:51.700 Latency(us) 00:20:51.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.700 =================================================================================================================== 00:20:51.700 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.700 [2024-05-15 08:32:38.511299] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 322920 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=323115 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 323115 /var/tmp/bdevperf.sock 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 323115 ']' 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:51.700 08:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.959 [2024-05-15 08:32:38.759250] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:20:51.959 [2024-05-15 08:32:38.759295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323115 ] 00:20:51.959 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.959 [2024-05-15 08:32:38.808844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.959 [2024-05-15 08:32:38.882381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:52.897 [2024-05-15 08:32:39.726301] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:52.897 [2024-05-15 08:32:39.728452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd7b30 (9): Bad file descriptor 00:20:52.897 [2024-05-15 08:32:39.729449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:52.897 [2024-05-15 08:32:39.729459] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:52.897 [2024-05-15 08:32:39.729468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
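(Test 155 repeats the attach with an empty PSK, so bdev_nvme_attach_controller is issued with no --psk at all; against the -k listener the connection does not survive, and the client again reports 'Transport endpoint is not connected' ahead of the JSON-RPC error below. For contrast with the earlier attach, a sketch with the rpc.py path shortened:)

# no --psk: nothing is offered for the TLS handshake, so the TLS-only listener rejects the connection
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1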
00:20:52.897 request: 00:20:52.897 { 00:20:52.897 "name": "TLSTEST", 00:20:52.897 "trtype": "tcp", 00:20:52.897 "traddr": "10.0.0.2", 00:20:52.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.897 "adrfam": "ipv4", 00:20:52.897 "trsvcid": "4420", 00:20:52.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.897 "method": "bdev_nvme_attach_controller", 00:20:52.897 "req_id": 1 00:20:52.897 } 00:20:52.897 Got JSON-RPC error response 00:20:52.897 response: 00:20:52.897 { 00:20:52.897 "code": -32602, 00:20:52.897 "message": "Invalid parameters" 00:20:52.897 } 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 323115 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 323115 ']' 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 323115 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 323115 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 323115' 00:20:52.897 killing process with pid 323115 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 323115 00:20:52.897 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.897 00:20:52.897 Latency(us) 00:20:52.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.897 =================================================================================================================== 00:20:52.897 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:52.897 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 323115 00:20:53.157 08:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:53.157 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:53.157 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:53.157 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:53.157 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:53.157 08:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 318440 00:20:53.157 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 318440 ']' 00:20:53.157 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 318440 00:20:53.157 08:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:53.157 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:53.157 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 318440 00:20:53.157 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:53.157 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:53.157 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 318440' 00:20:53.157 killing process with pid 318440 00:20:53.157 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 318440 00:20:53.157 
[2024-05-15 08:32:40.045866] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:53.157 [2024-05-15 08:32:40.045892] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:53.157 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 318440 00:20:53.416 08:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.KTYPvbw9bK 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.KTYPvbw9bK 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=323363 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 323363 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 323363 ']' 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:53.417 08:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.417 [2024-05-15 08:32:40.357509] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:20:53.417 [2024-05-15 08:32:40.357555] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.417 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.417 [2024-05-15 08:32:40.412753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.676 [2024-05-15 08:32:40.494395] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.676 [2024-05-15 08:32:40.494430] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.676 [2024-05-15 08:32:40.494440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.676 [2024-05-15 08:32:40.494446] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.676 [2024-05-15 08:32:40.494451] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.676 [2024-05-15 08:32:40.494472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.243 08:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:54.243 08:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:54.244 08:32:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.244 08:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.244 08:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.244 08:32:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.244 08:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.KTYPvbw9bK 00:20:54.244 08:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.KTYPvbw9bK 00:20:54.244 08:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:54.503 [2024-05-15 08:32:41.353263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.503 08:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:54.763 08:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:54.763 [2024-05-15 08:32:41.678075] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:54.763 [2024-05-15 08:32:41.678118] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.763 [2024-05-15 08:32:41.678292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.763 08:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:55.022 malloc0 00:20:55.022 08:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
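(The add_host call below registers host1 with the interchange-format key generated above. Judging by the inline python in the format_key trace and the resulting key_long, the interchange form is the prefix, a two-digit digest indicator, and base64 of the configured key bytes followed by their CRC-32 in little-endian order; a sketch of that computation under that assumed layout, mirroring the trace's own python - invocation:)

prefix=NVMeTLSkey-1 key=00112233445566778899aabbccddeeff0011223344556677 digest=2
python - "$prefix" "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = struct.pack('<I', zlib.crc32(key))   # 4-byte CRC-32 of the key, little-endian
print(f'{prefix}:{digest:02}:{base64.b64encode(key + crc).decode()}:')
EOF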
00:20:55.022 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KTYPvbw9bK 00:20:55.281 [2024-05-15 08:32:42.191579] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KTYPvbw9bK 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KTYPvbw9bK' 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=323685 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 323685 /var/tmp/bdevperf.sock 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 323685 ']' 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:55.281 08:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.281 [2024-05-15 08:32:42.237020] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:20:55.281 [2024-05-15 08:32:42.237065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323685 ] 00:20:55.281 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.281 [2024-05-15 08:32:42.288974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.540 [2024-05-15 08:32:42.363354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.540 08:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:55.540 08:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:55.540 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KTYPvbw9bK 00:20:55.800 [2024-05-15 08:32:42.607381] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.800 [2024-05-15 08:32:42.607467] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:55.800 TLSTESTn1 00:20:55.800 08:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:55.800 Running I/O for 10 seconds... 00:21:08.019 00:21:08.019 Latency(us) 00:21:08.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.019 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:08.019 Verification LBA range: start 0x0 length 0x2000 00:21:08.019 TLSTESTn1 : 10.01 5098.09 19.91 0.00 0.00 25071.58 5271.37 32369.09 00:21:08.019 =================================================================================================================== 00:21:08.019 Total : 5098.09 19.91 0.00 0.00 25071.58 5271.37 32369.09 00:21:08.019 0 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 323685 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 323685 ']' 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 323685 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 323685 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 323685' 00:21:08.019 killing process with pid 323685 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 323685 00:21:08.019 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.019 00:21:08.019 Latency(us) 00:21:08.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.019 
=================================================================================================================== 00:21:08.019 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.019 [2024-05-15 08:32:52.875605] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:08.019 08:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 323685 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.KTYPvbw9bK 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KTYPvbw9bK 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KTYPvbw9bK 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KTYPvbw9bK 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KTYPvbw9bK' 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=325457 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 325457 /var/tmp/bdevperf.sock 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 325457 ']' 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.019 [2024-05-15 08:32:53.133368] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
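(This case flips the key file from the 0600 set at creation time to 0666 and expects the attach to fail; bdev_nvme_load_psk's 'Incorrect permissions for PSK file' error appears just below. The guard being exercised amounts to a mode check before the key is read; an equivalent shell-side sketch, not SPDK's C implementation, assuming the group/other permission bits are what it objects to:)

# a PSK file must not be readable or writable by group or other
mode=$(stat -c '%a' /tmp/tmp.KTYPvbw9bK)
if (( 8#$mode & 8#077 )); then
    echo "Incorrect permissions for PSK file" >&2
    exit 1
fi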
00:21:08.019 [2024-05-15 08:32:53.133417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325457 ] 00:21:08.019 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.019 [2024-05-15 08:32:53.183857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.019 [2024-05-15 08:32:53.250378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:08.019 08:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KTYPvbw9bK 00:21:08.019 [2024-05-15 08:32:54.083730] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.019 [2024-05-15 08:32:54.083780] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:08.019 [2024-05-15 08:32:54.083787] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.KTYPvbw9bK 00:21:08.019 request: 00:21:08.019 { 00:21:08.019 "name": "TLSTEST", 00:21:08.019 "trtype": "tcp", 00:21:08.019 "traddr": "10.0.0.2", 00:21:08.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.019 "adrfam": "ipv4", 00:21:08.019 "trsvcid": "4420", 00:21:08.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.019 "psk": "/tmp/tmp.KTYPvbw9bK", 00:21:08.019 "method": "bdev_nvme_attach_controller", 00:21:08.019 "req_id": 1 00:21:08.019 } 00:21:08.019 Got JSON-RPC error response 00:21:08.019 response: 00:21:08.019 { 00:21:08.019 "code": -1, 00:21:08.019 "message": "Operation not permitted" 00:21:08.019 } 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 325457 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 325457 ']' 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 325457 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 325457 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 325457' 00:21:08.019 killing process with pid 325457 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 325457 00:21:08.019 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.019 00:21:08.019 Latency(us) 00:21:08.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.019 =================================================================================================================== 00:21:08.019 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # 
wait 325457 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 323363 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 323363 ']' 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 323363 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 323363 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 323363' 00:21:08.019 killing process with pid 323363 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 323363 00:21:08.019 [2024-05-15 08:32:54.392488] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:08.019 [2024-05-15 08:32:54.392526] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:08.019 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 323363 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=325699 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 325699 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 325699 ']' 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:08.020 08:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.020 [2024-05-15 08:32:54.646250] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:21:08.020 [2024-05-15 08:32:54.646297] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.020 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.020 [2024-05-15 08:32:54.700768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.020 [2024-05-15 08:32:54.772986] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.020 [2024-05-15 08:32:54.773019] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.020 [2024-05-15 08:32:54.773026] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.020 [2024-05-15 08:32:54.773032] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.020 [2024-05-15 08:32:54.773037] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.020 [2024-05-15 08:32:54.773070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.589 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:08.589 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:08.589 08:32:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.589 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.589 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.589 08:32:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.589 08:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.KTYPvbw9bK 00:21:08.589 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:08.590 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.KTYPvbw9bK 00:21:08.590 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:08.590 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.590 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:08.590 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.590 08:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.KTYPvbw9bK 00:21:08.590 08:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.KTYPvbw9bK 00:21:08.590 08:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:08.849 [2024-05-15 08:32:55.639501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.849 08:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:08.849 08:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:09.109 [2024-05-15 08:32:55.980360] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:09.109 [2024-05-15 08:32:55.980404] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:09.109 [2024-05-15 08:32:55.980557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.109 08:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:09.368 malloc0 00:21:09.368 08:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:09.368 08:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KTYPvbw9bK 00:21:09.628 [2024-05-15 08:32:56.489881] tcp.c:3572:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:09.628 [2024-05-15 08:32:56.489902] tcp.c:3658:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:09.628 [2024-05-15 08:32:56.489927] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:09.628 request: 00:21:09.628 { 00:21:09.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.628 "host": "nqn.2016-06.io.spdk:host1", 00:21:09.628 "psk": "/tmp/tmp.KTYPvbw9bK", 00:21:09.628 "method": "nvmf_subsystem_add_host", 00:21:09.628 "req_id": 1 00:21:09.628 } 00:21:09.628 Got JSON-RPC error response 00:21:09.628 response: 00:21:09.628 { 00:21:09.628 "code": -32603, 00:21:09.628 "message": "Internal error" 00:21:09.628 } 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 325699 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 325699 ']' 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 325699 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 325699 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 325699' 00:21:09.628 killing process with pid 325699 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 325699 00:21:09.628 [2024-05-15 08:32:56.554894] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:09.628 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 325699 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- 
target/tls.sh@181 -- # chmod 0600 /tmp/tmp.KTYPvbw9bK 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=326181 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 326181 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 326181 ']' 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:09.888 08:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.888 [2024-05-15 08:32:56.825101] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:21:09.888 [2024-05-15 08:32:56.825148] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.888 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.888 [2024-05-15 08:32:56.882251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.147 [2024-05-15 08:32:56.961253] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.147 [2024-05-15 08:32:56.961292] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.147 [2024-05-15 08:32:56.961298] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.148 [2024-05-15 08:32:56.961304] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.148 [2024-05-15 08:32:56.961308] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
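[Note] The Internal error above originates in tcp_load_psk rejecting the key file's mode bits; target/tls.sh@181 then tightens them before repeating the same sequence against the fresh target (pid 326181). A minimal sketch of the fix:

    # SPDK refuses to load a PSK whose file is readable beyond its owner
    chmod 0600 /tmp/tmp.KTYPvbw9bK
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KTYPvbw9bK   # now succeeds, with only the PSK-path deprecation warning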
00:21:10.148 [2024-05-15 08:32:56.961345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.716 08:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:10.716 08:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:10.716 08:32:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:10.716 08:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.716 08:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.716 08:32:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.716 08:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.KTYPvbw9bK 00:21:10.716 08:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.KTYPvbw9bK 00:21:10.716 08:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:10.976 [2024-05-15 08:32:57.820321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.976 08:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:11.235 08:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:11.235 [2024-05-15 08:32:58.165193] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:11.235 [2024-05-15 08:32:58.165249] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.235 [2024-05-15 08:32:58.165415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.235 08:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:11.495 malloc0 00:21:11.495 08:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KTYPvbw9bK 00:21:11.754 [2024-05-15 08:32:58.674792] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=326445 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 326445 /var/tmp/bdevperf.sock 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 326445 ']' 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:11.754 08:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.754 [2024-05-15 08:32:58.735480] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:21:11.754 [2024-05-15 08:32:58.735523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326445 ] 00:21:11.754 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.014 [2024-05-15 08:32:58.785465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.014 [2024-05-15 08:32:58.857261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.582 08:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:12.582 08:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:12.582 08:32:59 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KTYPvbw9bK 00:21:12.841 [2024-05-15 08:32:59.671834] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.841 [2024-05-15 08:32:59.671922] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:12.841 TLSTESTn1 00:21:12.841 08:32:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:13.101 08:33:00 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:13.101 "subsystems": [ 00:21:13.101 { 00:21:13.101 "subsystem": "keyring", 00:21:13.101 "config": [] 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "subsystem": "iobuf", 00:21:13.101 "config": [ 00:21:13.101 { 00:21:13.101 "method": "iobuf_set_options", 00:21:13.101 "params": { 00:21:13.101 "small_pool_count": 8192, 00:21:13.101 "large_pool_count": 1024, 00:21:13.101 "small_bufsize": 8192, 00:21:13.101 "large_bufsize": 135168 00:21:13.101 } 00:21:13.101 } 00:21:13.101 ] 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "subsystem": "sock", 00:21:13.101 "config": [ 00:21:13.101 { 00:21:13.101 "method": "sock_impl_set_options", 00:21:13.101 "params": { 00:21:13.101 "impl_name": "posix", 00:21:13.101 "recv_buf_size": 2097152, 00:21:13.101 "send_buf_size": 2097152, 00:21:13.101 "enable_recv_pipe": true, 00:21:13.101 "enable_quickack": false, 00:21:13.101 "enable_placement_id": 0, 00:21:13.101 "enable_zerocopy_send_server": true, 00:21:13.101 "enable_zerocopy_send_client": false, 00:21:13.101 "zerocopy_threshold": 0, 00:21:13.101 "tls_version": 0, 00:21:13.101 "enable_ktls": false 00:21:13.101 } 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "method": "sock_impl_set_options", 00:21:13.101 "params": { 00:21:13.101 
"impl_name": "ssl", 00:21:13.101 "recv_buf_size": 4096, 00:21:13.101 "send_buf_size": 4096, 00:21:13.101 "enable_recv_pipe": true, 00:21:13.101 "enable_quickack": false, 00:21:13.101 "enable_placement_id": 0, 00:21:13.101 "enable_zerocopy_send_server": true, 00:21:13.101 "enable_zerocopy_send_client": false, 00:21:13.101 "zerocopy_threshold": 0, 00:21:13.101 "tls_version": 0, 00:21:13.101 "enable_ktls": false 00:21:13.101 } 00:21:13.101 } 00:21:13.101 ] 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "subsystem": "vmd", 00:21:13.101 "config": [] 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "subsystem": "accel", 00:21:13.101 "config": [ 00:21:13.101 { 00:21:13.101 "method": "accel_set_options", 00:21:13.101 "params": { 00:21:13.101 "small_cache_size": 128, 00:21:13.101 "large_cache_size": 16, 00:21:13.101 "task_count": 2048, 00:21:13.101 "sequence_count": 2048, 00:21:13.101 "buf_count": 2048 00:21:13.101 } 00:21:13.101 } 00:21:13.101 ] 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "subsystem": "bdev", 00:21:13.101 "config": [ 00:21:13.101 { 00:21:13.101 "method": "bdev_set_options", 00:21:13.101 "params": { 00:21:13.101 "bdev_io_pool_size": 65535, 00:21:13.101 "bdev_io_cache_size": 256, 00:21:13.101 "bdev_auto_examine": true, 00:21:13.101 "iobuf_small_cache_size": 128, 00:21:13.101 "iobuf_large_cache_size": 16 00:21:13.101 } 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "method": "bdev_raid_set_options", 00:21:13.101 "params": { 00:21:13.101 "process_window_size_kb": 1024 00:21:13.101 } 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "method": "bdev_iscsi_set_options", 00:21:13.101 "params": { 00:21:13.101 "timeout_sec": 30 00:21:13.101 } 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "method": "bdev_nvme_set_options", 00:21:13.101 "params": { 00:21:13.101 "action_on_timeout": "none", 00:21:13.101 "timeout_us": 0, 00:21:13.101 "timeout_admin_us": 0, 00:21:13.101 "keep_alive_timeout_ms": 10000, 00:21:13.101 "arbitration_burst": 0, 00:21:13.101 "low_priority_weight": 0, 00:21:13.101 "medium_priority_weight": 0, 00:21:13.101 "high_priority_weight": 0, 00:21:13.101 "nvme_adminq_poll_period_us": 10000, 00:21:13.101 "nvme_ioq_poll_period_us": 0, 00:21:13.101 "io_queue_requests": 0, 00:21:13.101 "delay_cmd_submit": true, 00:21:13.101 "transport_retry_count": 4, 00:21:13.101 "bdev_retry_count": 3, 00:21:13.101 "transport_ack_timeout": 0, 00:21:13.101 "ctrlr_loss_timeout_sec": 0, 00:21:13.101 "reconnect_delay_sec": 0, 00:21:13.101 "fast_io_fail_timeout_sec": 0, 00:21:13.101 "disable_auto_failback": false, 00:21:13.101 "generate_uuids": false, 00:21:13.101 "transport_tos": 0, 00:21:13.101 "nvme_error_stat": false, 00:21:13.101 "rdma_srq_size": 0, 00:21:13.101 "io_path_stat": false, 00:21:13.101 "allow_accel_sequence": false, 00:21:13.101 "rdma_max_cq_size": 0, 00:21:13.101 "rdma_cm_event_timeout_ms": 0, 00:21:13.101 "dhchap_digests": [ 00:21:13.101 "sha256", 00:21:13.101 "sha384", 00:21:13.101 "sha512" 00:21:13.101 ], 00:21:13.101 "dhchap_dhgroups": [ 00:21:13.101 "null", 00:21:13.101 "ffdhe2048", 00:21:13.101 "ffdhe3072", 00:21:13.101 "ffdhe4096", 00:21:13.101 "ffdhe6144", 00:21:13.101 "ffdhe8192" 00:21:13.101 ] 00:21:13.101 } 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "method": "bdev_nvme_set_hotplug", 00:21:13.101 "params": { 00:21:13.101 "period_us": 100000, 00:21:13.101 "enable": false 00:21:13.101 } 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "method": "bdev_malloc_create", 00:21:13.101 "params": { 00:21:13.101 "name": "malloc0", 00:21:13.101 "num_blocks": 8192, 00:21:13.101 "block_size": 4096, 00:21:13.101 
"physical_block_size": 4096, 00:21:13.101 "uuid": "036299b0-3487-425c-b9a7-afe904812414", 00:21:13.101 "optimal_io_boundary": 0 00:21:13.101 } 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "method": "bdev_wait_for_examine" 00:21:13.101 } 00:21:13.101 ] 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "subsystem": "nbd", 00:21:13.101 "config": [] 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "subsystem": "scheduler", 00:21:13.101 "config": [ 00:21:13.101 { 00:21:13.101 "method": "framework_set_scheduler", 00:21:13.101 "params": { 00:21:13.101 "name": "static" 00:21:13.101 } 00:21:13.101 } 00:21:13.101 ] 00:21:13.101 }, 00:21:13.101 { 00:21:13.101 "subsystem": "nvmf", 00:21:13.101 "config": [ 00:21:13.101 { 00:21:13.101 "method": "nvmf_set_config", 00:21:13.101 "params": { 00:21:13.101 "discovery_filter": "match_any", 00:21:13.101 "admin_cmd_passthru": { 00:21:13.101 "identify_ctrlr": false 00:21:13.101 } 00:21:13.102 } 00:21:13.102 }, 00:21:13.102 { 00:21:13.102 "method": "nvmf_set_max_subsystems", 00:21:13.102 "params": { 00:21:13.102 "max_subsystems": 1024 00:21:13.102 } 00:21:13.102 }, 00:21:13.102 { 00:21:13.102 "method": "nvmf_set_crdt", 00:21:13.102 "params": { 00:21:13.102 "crdt1": 0, 00:21:13.102 "crdt2": 0, 00:21:13.102 "crdt3": 0 00:21:13.102 } 00:21:13.102 }, 00:21:13.102 { 00:21:13.102 "method": "nvmf_create_transport", 00:21:13.102 "params": { 00:21:13.102 "trtype": "TCP", 00:21:13.102 "max_queue_depth": 128, 00:21:13.102 "max_io_qpairs_per_ctrlr": 127, 00:21:13.102 "in_capsule_data_size": 4096, 00:21:13.102 "max_io_size": 131072, 00:21:13.102 "io_unit_size": 131072, 00:21:13.102 "max_aq_depth": 128, 00:21:13.102 "num_shared_buffers": 511, 00:21:13.102 "buf_cache_size": 4294967295, 00:21:13.102 "dif_insert_or_strip": false, 00:21:13.102 "zcopy": false, 00:21:13.102 "c2h_success": false, 00:21:13.102 "sock_priority": 0, 00:21:13.102 "abort_timeout_sec": 1, 00:21:13.102 "ack_timeout": 0, 00:21:13.102 "data_wr_pool_size": 0 00:21:13.102 } 00:21:13.102 }, 00:21:13.102 { 00:21:13.102 "method": "nvmf_create_subsystem", 00:21:13.102 "params": { 00:21:13.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.102 "allow_any_host": false, 00:21:13.102 "serial_number": "SPDK00000000000001", 00:21:13.102 "model_number": "SPDK bdev Controller", 00:21:13.102 "max_namespaces": 10, 00:21:13.102 "min_cntlid": 1, 00:21:13.102 "max_cntlid": 65519, 00:21:13.102 "ana_reporting": false 00:21:13.102 } 00:21:13.102 }, 00:21:13.102 { 00:21:13.102 "method": "nvmf_subsystem_add_host", 00:21:13.102 "params": { 00:21:13.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.102 "host": "nqn.2016-06.io.spdk:host1", 00:21:13.102 "psk": "/tmp/tmp.KTYPvbw9bK" 00:21:13.102 } 00:21:13.102 }, 00:21:13.102 { 00:21:13.102 "method": "nvmf_subsystem_add_ns", 00:21:13.102 "params": { 00:21:13.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.102 "namespace": { 00:21:13.102 "nsid": 1, 00:21:13.102 "bdev_name": "malloc0", 00:21:13.102 "nguid": "036299B03487425CB9A7AFE904812414", 00:21:13.102 "uuid": "036299b0-3487-425c-b9a7-afe904812414", 00:21:13.102 "no_auto_visible": false 00:21:13.102 } 00:21:13.102 } 00:21:13.102 }, 00:21:13.102 { 00:21:13.102 "method": "nvmf_subsystem_add_listener", 00:21:13.102 "params": { 00:21:13.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.102 "listen_address": { 00:21:13.102 "trtype": "TCP", 00:21:13.102 "adrfam": "IPv4", 00:21:13.102 "traddr": "10.0.0.2", 00:21:13.102 "trsvcid": "4420" 00:21:13.102 }, 00:21:13.102 "secure_channel": true 00:21:13.102 } 00:21:13.102 } 00:21:13.102 ] 00:21:13.102 } 
00:21:13.102 ] 00:21:13.102 }' 00:21:13.102 08:33:00 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:13.362 08:33:00 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:13.362 "subsystems": [ 00:21:13.362 { 00:21:13.362 "subsystem": "keyring", 00:21:13.362 "config": [] 00:21:13.362 }, 00:21:13.362 { 00:21:13.362 "subsystem": "iobuf", 00:21:13.362 "config": [ 00:21:13.362 { 00:21:13.362 "method": "iobuf_set_options", 00:21:13.362 "params": { 00:21:13.362 "small_pool_count": 8192, 00:21:13.362 "large_pool_count": 1024, 00:21:13.362 "small_bufsize": 8192, 00:21:13.362 "large_bufsize": 135168 00:21:13.362 } 00:21:13.362 } 00:21:13.362 ] 00:21:13.362 }, 00:21:13.362 { 00:21:13.362 "subsystem": "sock", 00:21:13.362 "config": [ 00:21:13.362 { 00:21:13.362 "method": "sock_impl_set_options", 00:21:13.362 "params": { 00:21:13.362 "impl_name": "posix", 00:21:13.362 "recv_buf_size": 2097152, 00:21:13.362 "send_buf_size": 2097152, 00:21:13.362 "enable_recv_pipe": true, 00:21:13.362 "enable_quickack": false, 00:21:13.362 "enable_placement_id": 0, 00:21:13.362 "enable_zerocopy_send_server": true, 00:21:13.362 "enable_zerocopy_send_client": false, 00:21:13.362 "zerocopy_threshold": 0, 00:21:13.362 "tls_version": 0, 00:21:13.362 "enable_ktls": false 00:21:13.362 } 00:21:13.362 }, 00:21:13.362 { 00:21:13.362 "method": "sock_impl_set_options", 00:21:13.362 "params": { 00:21:13.362 "impl_name": "ssl", 00:21:13.362 "recv_buf_size": 4096, 00:21:13.362 "send_buf_size": 4096, 00:21:13.362 "enable_recv_pipe": true, 00:21:13.362 "enable_quickack": false, 00:21:13.362 "enable_placement_id": 0, 00:21:13.362 "enable_zerocopy_send_server": true, 00:21:13.362 "enable_zerocopy_send_client": false, 00:21:13.362 "zerocopy_threshold": 0, 00:21:13.362 "tls_version": 0, 00:21:13.362 "enable_ktls": false 00:21:13.362 } 00:21:13.362 } 00:21:13.362 ] 00:21:13.362 }, 00:21:13.362 { 00:21:13.362 "subsystem": "vmd", 00:21:13.362 "config": [] 00:21:13.362 }, 00:21:13.362 { 00:21:13.362 "subsystem": "accel", 00:21:13.362 "config": [ 00:21:13.362 { 00:21:13.362 "method": "accel_set_options", 00:21:13.362 "params": { 00:21:13.362 "small_cache_size": 128, 00:21:13.362 "large_cache_size": 16, 00:21:13.362 "task_count": 2048, 00:21:13.362 "sequence_count": 2048, 00:21:13.362 "buf_count": 2048 00:21:13.362 } 00:21:13.362 } 00:21:13.362 ] 00:21:13.362 }, 00:21:13.362 { 00:21:13.362 "subsystem": "bdev", 00:21:13.362 "config": [ 00:21:13.362 { 00:21:13.362 "method": "bdev_set_options", 00:21:13.362 "params": { 00:21:13.362 "bdev_io_pool_size": 65535, 00:21:13.362 "bdev_io_cache_size": 256, 00:21:13.363 "bdev_auto_examine": true, 00:21:13.363 "iobuf_small_cache_size": 128, 00:21:13.363 "iobuf_large_cache_size": 16 00:21:13.363 } 00:21:13.363 }, 00:21:13.363 { 00:21:13.363 "method": "bdev_raid_set_options", 00:21:13.363 "params": { 00:21:13.363 "process_window_size_kb": 1024 00:21:13.363 } 00:21:13.363 }, 00:21:13.363 { 00:21:13.363 "method": "bdev_iscsi_set_options", 00:21:13.363 "params": { 00:21:13.363 "timeout_sec": 30 00:21:13.363 } 00:21:13.363 }, 00:21:13.363 { 00:21:13.363 "method": "bdev_nvme_set_options", 00:21:13.363 "params": { 00:21:13.363 "action_on_timeout": "none", 00:21:13.363 "timeout_us": 0, 00:21:13.363 "timeout_admin_us": 0, 00:21:13.363 "keep_alive_timeout_ms": 10000, 00:21:13.363 "arbitration_burst": 0, 00:21:13.363 "low_priority_weight": 0, 00:21:13.363 "medium_priority_weight": 0, 00:21:13.363 
"high_priority_weight": 0, 00:21:13.363 "nvme_adminq_poll_period_us": 10000, 00:21:13.363 "nvme_ioq_poll_period_us": 0, 00:21:13.363 "io_queue_requests": 512, 00:21:13.363 "delay_cmd_submit": true, 00:21:13.363 "transport_retry_count": 4, 00:21:13.363 "bdev_retry_count": 3, 00:21:13.363 "transport_ack_timeout": 0, 00:21:13.363 "ctrlr_loss_timeout_sec": 0, 00:21:13.363 "reconnect_delay_sec": 0, 00:21:13.363 "fast_io_fail_timeout_sec": 0, 00:21:13.363 "disable_auto_failback": false, 00:21:13.363 "generate_uuids": false, 00:21:13.363 "transport_tos": 0, 00:21:13.363 "nvme_error_stat": false, 00:21:13.363 "rdma_srq_size": 0, 00:21:13.363 "io_path_stat": false, 00:21:13.363 "allow_accel_sequence": false, 00:21:13.363 "rdma_max_cq_size": 0, 00:21:13.363 "rdma_cm_event_timeout_ms": 0, 00:21:13.363 "dhchap_digests": [ 00:21:13.363 "sha256", 00:21:13.363 "sha384", 00:21:13.363 "sha512" 00:21:13.363 ], 00:21:13.363 "dhchap_dhgroups": [ 00:21:13.363 "null", 00:21:13.363 "ffdhe2048", 00:21:13.363 "ffdhe3072", 00:21:13.363 "ffdhe4096", 00:21:13.363 "ffdhe6144", 00:21:13.363 "ffdhe8192" 00:21:13.363 ] 00:21:13.363 } 00:21:13.363 }, 00:21:13.363 { 00:21:13.363 "method": "bdev_nvme_attach_controller", 00:21:13.363 "params": { 00:21:13.363 "name": "TLSTEST", 00:21:13.363 "trtype": "TCP", 00:21:13.363 "adrfam": "IPv4", 00:21:13.363 "traddr": "10.0.0.2", 00:21:13.363 "trsvcid": "4420", 00:21:13.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.363 "prchk_reftag": false, 00:21:13.363 "prchk_guard": false, 00:21:13.363 "ctrlr_loss_timeout_sec": 0, 00:21:13.363 "reconnect_delay_sec": 0, 00:21:13.363 "fast_io_fail_timeout_sec": 0, 00:21:13.363 "psk": "/tmp/tmp.KTYPvbw9bK", 00:21:13.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.363 "hdgst": false, 00:21:13.363 "ddgst": false 00:21:13.363 } 00:21:13.363 }, 00:21:13.363 { 00:21:13.363 "method": "bdev_nvme_set_hotplug", 00:21:13.363 "params": { 00:21:13.363 "period_us": 100000, 00:21:13.363 "enable": false 00:21:13.363 } 00:21:13.363 }, 00:21:13.363 { 00:21:13.363 "method": "bdev_wait_for_examine" 00:21:13.363 } 00:21:13.363 ] 00:21:13.363 }, 00:21:13.363 { 00:21:13.363 "subsystem": "nbd", 00:21:13.363 "config": [] 00:21:13.363 } 00:21:13.363 ] 00:21:13.363 }' 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 326445 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 326445 ']' 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 326445 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 326445 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 326445' 00:21:13.363 killing process with pid 326445 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 326445 00:21:13.363 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.363 00:21:13.363 Latency(us) 00:21:13.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.363 
=================================================================================================================== 00:21:13.363 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:13.363 [2024-05-15 08:33:00.302545] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:13.363 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 326445 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 326181 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 326181 ']' 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 326181 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 326181 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 326181' 00:21:13.623 killing process with pid 326181 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 326181 00:21:13.623 [2024-05-15 08:33:00.548835] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:13.623 [2024-05-15 08:33:00.548868] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:13.623 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 326181 00:21:13.884 08:33:00 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:13.884 08:33:00 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:13.884 "subsystems": [ 00:21:13.884 { 00:21:13.884 "subsystem": "keyring", 00:21:13.884 "config": [] 00:21:13.884 }, 00:21:13.884 { 00:21:13.884 "subsystem": "iobuf", 00:21:13.884 "config": [ 00:21:13.884 { 00:21:13.884 "method": "iobuf_set_options", 00:21:13.884 "params": { 00:21:13.884 "small_pool_count": 8192, 00:21:13.884 "large_pool_count": 1024, 00:21:13.884 "small_bufsize": 8192, 00:21:13.884 "large_bufsize": 135168 00:21:13.884 } 00:21:13.884 } 00:21:13.884 ] 00:21:13.884 }, 00:21:13.884 { 00:21:13.884 "subsystem": "sock", 00:21:13.884 "config": [ 00:21:13.884 { 00:21:13.884 "method": "sock_impl_set_options", 00:21:13.884 "params": { 00:21:13.884 "impl_name": "posix", 00:21:13.884 "recv_buf_size": 2097152, 00:21:13.884 "send_buf_size": 2097152, 00:21:13.884 "enable_recv_pipe": true, 00:21:13.884 "enable_quickack": false, 00:21:13.884 "enable_placement_id": 0, 00:21:13.884 "enable_zerocopy_send_server": true, 00:21:13.884 "enable_zerocopy_send_client": false, 00:21:13.884 "zerocopy_threshold": 0, 00:21:13.884 "tls_version": 0, 00:21:13.884 "enable_ktls": false 00:21:13.884 } 00:21:13.884 }, 00:21:13.884 { 00:21:13.884 "method": "sock_impl_set_options", 00:21:13.884 "params": { 00:21:13.884 "impl_name": "ssl", 00:21:13.884 "recv_buf_size": 4096, 00:21:13.884 "send_buf_size": 4096, 00:21:13.884 "enable_recv_pipe": true, 00:21:13.884 "enable_quickack": false, 
00:21:13.884 "enable_placement_id": 0, 00:21:13.884 "enable_zerocopy_send_server": true, 00:21:13.884 "enable_zerocopy_send_client": false, 00:21:13.884 "zerocopy_threshold": 0, 00:21:13.884 "tls_version": 0, 00:21:13.884 "enable_ktls": false 00:21:13.884 } 00:21:13.884 } 00:21:13.884 ] 00:21:13.884 }, 00:21:13.884 { 00:21:13.884 "subsystem": "vmd", 00:21:13.884 "config": [] 00:21:13.884 }, 00:21:13.884 { 00:21:13.884 "subsystem": "accel", 00:21:13.884 "config": [ 00:21:13.884 { 00:21:13.884 "method": "accel_set_options", 00:21:13.884 "params": { 00:21:13.884 "small_cache_size": 128, 00:21:13.884 "large_cache_size": 16, 00:21:13.884 "task_count": 2048, 00:21:13.884 "sequence_count": 2048, 00:21:13.884 "buf_count": 2048 00:21:13.884 } 00:21:13.884 } 00:21:13.884 ] 00:21:13.884 }, 00:21:13.884 { 00:21:13.884 "subsystem": "bdev", 00:21:13.884 "config": [ 00:21:13.884 { 00:21:13.884 "method": "bdev_set_options", 00:21:13.884 "params": { 00:21:13.884 "bdev_io_pool_size": 65535, 00:21:13.884 "bdev_io_cache_size": 256, 00:21:13.884 "bdev_auto_examine": true, 00:21:13.884 "iobuf_small_cache_size": 128, 00:21:13.884 "iobuf_large_cache_size": 16 00:21:13.884 } 00:21:13.884 }, 00:21:13.884 { 00:21:13.884 "method": "bdev_raid_set_options", 00:21:13.884 "params": { 00:21:13.884 "process_window_size_kb": 1024 00:21:13.884 } 00:21:13.884 }, 00:21:13.884 { 00:21:13.884 "method": "bdev_iscsi_set_options", 00:21:13.884 "params": { 00:21:13.884 "timeout_sec": 30 00:21:13.884 } 00:21:13.884 }, 00:21:13.884 { 00:21:13.884 "method": "bdev_nvme_set_options", 00:21:13.884 "params": { 00:21:13.884 "action_on_timeout": "none", 00:21:13.885 "timeout_us": 0, 00:21:13.885 "timeout_admin_us": 0, 00:21:13.885 "keep_alive_timeout_ms": 10000, 00:21:13.885 "arbitration_burst": 0, 00:21:13.885 "low_priority_weight": 0, 00:21:13.885 "medium_priority_weight": 0, 00:21:13.885 "high_priority_weight": 0, 00:21:13.885 "nvme_adminq_poll_period_us": 10000, 00:21:13.885 "nvme_ioq_poll_period_us": 0, 00:21:13.885 "io_queue_requests": 0, 00:21:13.885 "delay_cmd_submit": true, 00:21:13.885 "transport_retry_count": 4, 00:21:13.885 "bdev_retry_count": 3, 00:21:13.885 "transport_ack_timeout": 0, 00:21:13.885 "ctrlr_loss_timeout_sec": 0, 00:21:13.885 "reconnect_delay_sec": 0, 00:21:13.885 "fast_io_fail_timeout_sec": 0, 00:21:13.885 "disable_auto_failback": false, 00:21:13.885 "generate_uuids": false, 00:21:13.885 "transport_tos": 0, 00:21:13.885 "nvme_error_stat": false, 00:21:13.885 "rdma_srq_size": 0, 00:21:13.885 "io_path_stat": false, 00:21:13.885 "allow_accel_sequence": false, 00:21:13.885 "rdma_max_cq_size": 0, 00:21:13.885 "rdma_cm_event_timeout_ms": 0, 00:21:13.885 "dhchap_digests": [ 00:21:13.885 "sha256", 00:21:13.885 "sha384", 00:21:13.885 "sha512" 00:21:13.885 ], 00:21:13.885 "dhchap_dhgroups": [ 00:21:13.885 "null", 00:21:13.885 "ffdhe2048", 00:21:13.885 "ffdhe3072", 00:21:13.885 "ffdhe4096", 00:21:13.885 "ffdhe6144", 00:21:13.885 "ffdhe8192" 00:21:13.885 ] 00:21:13.885 } 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "method": "bdev_nvme_set_hotplug", 00:21:13.885 "params": { 00:21:13.885 "period_us": 100000, 00:21:13.885 "enable": false 00:21:13.885 } 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "method": "bdev_malloc_create", 00:21:13.885 "params": { 00:21:13.885 "name": "malloc0", 00:21:13.885 "num_blocks": 8192, 00:21:13.885 "block_size": 4096, 00:21:13.885 "physical_block_size": 4096, 00:21:13.885 "uuid": "036299b0-3487-425c-b9a7-afe904812414", 00:21:13.885 "optimal_io_boundary": 0 00:21:13.885 } 00:21:13.885 }, 00:21:13.885 
{ 00:21:13.885 "method": "bdev_wait_for_examine" 00:21:13.885 } 00:21:13.885 ] 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "subsystem": "nbd", 00:21:13.885 "config": [] 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "subsystem": "scheduler", 00:21:13.885 "config": [ 00:21:13.885 { 00:21:13.885 "method": "framework_set_scheduler", 00:21:13.885 "params": { 00:21:13.885 "name": "static" 00:21:13.885 } 00:21:13.885 } 00:21:13.885 ] 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "subsystem": "nvmf", 00:21:13.885 "config": [ 00:21:13.885 { 00:21:13.885 "method": "nvmf_set_config", 00:21:13.885 "params": { 00:21:13.885 "discovery_filter": "match_any", 00:21:13.885 "admin_cmd_passthru": { 00:21:13.885 "identify_ctrlr": false 00:21:13.885 } 00:21:13.885 } 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "method": "nvmf_set_max_subsystems", 00:21:13.885 "params": { 00:21:13.885 "max_subsystems": 1024 00:21:13.885 } 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "method": "nvmf_set_crdt", 00:21:13.885 "params": { 00:21:13.885 "crdt1": 0, 00:21:13.885 "crdt2": 0, 00:21:13.885 "crdt3": 0 00:21:13.885 } 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "method": "nvmf_create_transport", 00:21:13.885 "params": { 00:21:13.885 "trtype": "TCP", 00:21:13.885 "max_queue_depth": 128, 00:21:13.885 "max_io_qpairs_per_ctrlr": 127, 00:21:13.885 "in_capsule_data_size": 4096, 00:21:13.885 "max_io_size": 131072, 00:21:13.885 "io_unit_size": 131072, 00:21:13.885 "max_aq_depth": 128, 00:21:13.885 "num_shared_buffers": 511, 00:21:13.885 "buf_cache_size": 4294967295, 00:21:13.885 "dif_insert_or_strip": false, 00:21:13.885 "zcopy": false, 00:21:13.885 "c2h_success": false, 00:21:13.885 "sock_priority": 0, 00:21:13.885 "abort_timeout_sec": 1, 00:21:13.885 "ack_timeout": 0, 00:21:13.885 "data_wr_pool_size": 0 00:21:13.885 } 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "method": "nvmf_create_subsystem", 00:21:13.885 "params": { 00:21:13.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.885 "allow_any_host": false, 00:21:13.885 "serial_number": "SPDK00000000000001", 00:21:13.885 "model_number": "SPDK bdev Controller", 00:21:13.885 "max_namespaces": 10, 00:21:13.885 "min_cntlid": 1, 00:21:13.885 "max_cntlid": 65519, 00:21:13.885 "ana_reporting": false 00:21:13.885 } 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "method": "nvmf_subsystem_add_host", 00:21:13.885 "params": { 00:21:13.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.885 "host": "nqn.2016-06.io.spdk:host1", 00:21:13.885 "psk": "/tmp/tmp.KTYPvbw9bK" 00:21:13.885 } 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "method": "nvmf_subsystem_add_ns", 00:21:13.885 "params": { 00:21:13.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.885 "namespace": { 00:21:13.885 "nsid": 1, 00:21:13.885 "bdev_name": "malloc0", 00:21:13.885 "nguid": "036299B03487425CB9A7AFE904812414", 00:21:13.885 "uuid": "036299b0-3487-425c-b9a7-afe904812414", 00:21:13.885 "no_auto_visible": false 00:21:13.885 } 00:21:13.885 } 00:21:13.885 }, 00:21:13.885 { 00:21:13.885 "method": "nvmf_subsystem_add_listener", 00:21:13.885 "params": { 00:21:13.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.885 "listen_address": { 00:21:13.885 "trtype": "TCP", 00:21:13.885 "adrfam": "IPv4", 00:21:13.885 "traddr": "10.0.0.2", 00:21:13.885 "trsvcid": "4420" 00:21:13.885 }, 00:21:13.885 "secure_channel": true 00:21:13.885 } 00:21:13.885 } 00:21:13.885 ] 00:21:13.885 } 00:21:13.885 ] 00:21:13.885 }' 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=326958 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 326958 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 326958 ']' 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:13.885 08:33:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.885 [2024-05-15 08:33:00.823194] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:21:13.885 [2024-05-15 08:33:00.823240] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.885 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.885 [2024-05-15 08:33:00.879971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.145 [2024-05-15 08:33:00.959522] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.145 [2024-05-15 08:33:00.959557] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.145 [2024-05-15 08:33:00.959564] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.145 [2024-05-15 08:33:00.959570] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.145 [2024-05-15 08:33:00.959578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
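[Note] The target started here (pid 326958) is not configured by hand: nvmf/common.sh@480 feeds the JSON captured earlier by save_config back in through -c /dev/fd/62. A sketch of the equivalent with an ordinary file (tgtconf.json is a hypothetical name for the saved dump; the ip netns wrapper from the trace is dropped for brevity):

    scripts/rpc.py save_config > tgtconf.json                  # dump live JSON-RPC state
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgtconf.json   # fresh target replays it at startup

The replayed config already contains the TLS listener and the nvmf_subsystem_add_host entry with the PSK path, which is why the same deprecation warnings reappear during this startup.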
00:21:14.145 [2024-05-15 08:33:00.959649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.145 [2024-05-15 08:33:01.154671] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.405 [2024-05-15 08:33:01.170630] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:14.405 [2024-05-15 08:33:01.186662] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:14.405 [2024-05-15 08:33:01.186704] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.405 [2024-05-15 08:33:01.200454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.665 08:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=327059 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 327059 /var/tmp/bdevperf.sock 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 327059 ']' 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
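[Note] The JSON echoed next is the initiator side of the same pattern: the bdevperf configuration saved at target/tls.sh@197 is fed back through /dev/fd/63, so the bdev_nvme_attach_controller call, including its TLS PSK, is replayed at startup rather than issued over RPC afterwards. Roughly, with a plain file standing in for the fd (bdevperfconf.json is hypothetical):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c bdevperfconf.json

-z keeps bdevperf parked on its RPC socket instead of running immediately; the actual I/O is kicked off later by bdevperf.py.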
00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:14.666 08:33:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:14.666 "subsystems": [ 00:21:14.666 { 00:21:14.666 "subsystem": "keyring", 00:21:14.666 "config": [] 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "subsystem": "iobuf", 00:21:14.666 "config": [ 00:21:14.666 { 00:21:14.666 "method": "iobuf_set_options", 00:21:14.666 "params": { 00:21:14.666 "small_pool_count": 8192, 00:21:14.666 "large_pool_count": 1024, 00:21:14.666 "small_bufsize": 8192, 00:21:14.666 "large_bufsize": 135168 00:21:14.666 } 00:21:14.666 } 00:21:14.666 ] 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "subsystem": "sock", 00:21:14.666 "config": [ 00:21:14.666 { 00:21:14.666 "method": "sock_impl_set_options", 00:21:14.666 "params": { 00:21:14.666 "impl_name": "posix", 00:21:14.666 "recv_buf_size": 2097152, 00:21:14.666 "send_buf_size": 2097152, 00:21:14.666 "enable_recv_pipe": true, 00:21:14.666 "enable_quickack": false, 00:21:14.666 "enable_placement_id": 0, 00:21:14.666 "enable_zerocopy_send_server": true, 00:21:14.666 "enable_zerocopy_send_client": false, 00:21:14.666 "zerocopy_threshold": 0, 00:21:14.666 "tls_version": 0, 00:21:14.666 "enable_ktls": false 00:21:14.666 } 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "method": "sock_impl_set_options", 00:21:14.666 "params": { 00:21:14.666 "impl_name": "ssl", 00:21:14.666 "recv_buf_size": 4096, 00:21:14.666 "send_buf_size": 4096, 00:21:14.666 "enable_recv_pipe": true, 00:21:14.666 "enable_quickack": false, 00:21:14.666 "enable_placement_id": 0, 00:21:14.666 "enable_zerocopy_send_server": true, 00:21:14.666 "enable_zerocopy_send_client": false, 00:21:14.666 "zerocopy_threshold": 0, 00:21:14.666 "tls_version": 0, 00:21:14.666 "enable_ktls": false 00:21:14.666 } 00:21:14.666 } 00:21:14.666 ] 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "subsystem": "vmd", 00:21:14.666 "config": [] 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "subsystem": "accel", 00:21:14.666 "config": [ 00:21:14.666 { 00:21:14.666 "method": "accel_set_options", 00:21:14.666 "params": { 00:21:14.666 "small_cache_size": 128, 00:21:14.666 "large_cache_size": 16, 00:21:14.666 "task_count": 2048, 00:21:14.666 "sequence_count": 2048, 00:21:14.666 "buf_count": 2048 00:21:14.666 } 00:21:14.666 } 00:21:14.666 ] 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "subsystem": "bdev", 00:21:14.666 "config": [ 00:21:14.666 { 00:21:14.666 "method": "bdev_set_options", 00:21:14.666 "params": { 00:21:14.666 "bdev_io_pool_size": 65535, 00:21:14.666 "bdev_io_cache_size": 256, 00:21:14.666 "bdev_auto_examine": true, 00:21:14.666 "iobuf_small_cache_size": 128, 00:21:14.666 "iobuf_large_cache_size": 16 00:21:14.666 } 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "method": "bdev_raid_set_options", 00:21:14.666 "params": { 00:21:14.666 "process_window_size_kb": 1024 00:21:14.666 } 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "method": "bdev_iscsi_set_options", 00:21:14.666 "params": { 00:21:14.666 "timeout_sec": 30 00:21:14.666 } 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "method": "bdev_nvme_set_options", 00:21:14.666 "params": { 00:21:14.666 "action_on_timeout": "none", 00:21:14.666 "timeout_us": 0, 00:21:14.666 
"timeout_admin_us": 0, 00:21:14.666 "keep_alive_timeout_ms": 10000, 00:21:14.666 "arbitration_burst": 0, 00:21:14.666 "low_priority_weight": 0, 00:21:14.666 "medium_priority_weight": 0, 00:21:14.666 "high_priority_weight": 0, 00:21:14.666 "nvme_adminq_poll_period_us": 10000, 00:21:14.666 "nvme_ioq_poll_period_us": 0, 00:21:14.666 "io_queue_requests": 512, 00:21:14.666 "delay_cmd_submit": true, 00:21:14.666 "transport_retry_count": 4, 00:21:14.666 "bdev_retry_count": 3, 00:21:14.666 "transport_ack_timeout": 0, 00:21:14.666 "ctrlr_loss_timeout_sec": 0, 00:21:14.666 "reconnect_delay_sec": 0, 00:21:14.666 "fast_io_fail_timeout_sec": 0, 00:21:14.666 "disable_auto_failback": false, 00:21:14.666 "generate_uuids": false, 00:21:14.666 "transport_tos": 0, 00:21:14.666 "nvme_error_stat": false, 00:21:14.666 "rdma_srq_size": 0, 00:21:14.666 "io_path_stat": false, 00:21:14.666 "allow_accel_sequence": false, 00:21:14.666 "rdma_max_cq_size": 0, 00:21:14.666 "rdma_cm_event_timeout_ms": 0, 00:21:14.666 "dhchap_digests": [ 00:21:14.666 "sha256", 00:21:14.666 "sha384", 00:21:14.666 "sha512" 00:21:14.666 ], 00:21:14.666 "dhchap_dhgroups": [ 00:21:14.666 "null", 00:21:14.666 "ffdhe2048", 00:21:14.666 "ffdhe3072", 00:21:14.666 "ffdhe4096", 00:21:14.666 "ffdhe6144", 00:21:14.666 "ffdhe8192" 00:21:14.666 ] 00:21:14.666 } 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "method": "bdev_nvme_attach_controller", 00:21:14.666 "params": { 00:21:14.666 "name": "TLSTEST", 00:21:14.666 "trtype": "TCP", 00:21:14.666 "adrfam": "IPv4", 00:21:14.666 "traddr": "10.0.0.2", 00:21:14.666 "trsvcid": "4420", 00:21:14.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.666 "prchk_reftag": false, 00:21:14.666 "prchk_guard": false, 00:21:14.666 "ctrlr_loss_timeout_sec": 0, 00:21:14.666 "reconnect_delay_sec": 0, 00:21:14.666 "fast_io_fail_timeout_sec": 0, 00:21:14.666 "psk": "/tmp/tmp.KTYPvbw9bK", 00:21:14.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.666 "hdgst": false, 00:21:14.666 "ddgst": false 00:21:14.666 } 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "method": "bdev_nvme_set_hotplug", 00:21:14.666 "params": { 00:21:14.666 "period_us": 100000, 00:21:14.666 "enable": false 00:21:14.666 } 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "method": "bdev_wait_for_examine" 00:21:14.666 } 00:21:14.666 ] 00:21:14.666 }, 00:21:14.666 { 00:21:14.666 "subsystem": "nbd", 00:21:14.666 "config": [] 00:21:14.666 } 00:21:14.666 ] 00:21:14.666 }' 00:21:14.666 [2024-05-15 08:33:01.676028] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:21:14.666 [2024-05-15 08:33:01.676079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327059 ] 00:21:14.926 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.926 [2024-05-15 08:33:01.725931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.926 [2024-05-15 08:33:01.803707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.926 [2024-05-15 08:33:01.936958] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.926 [2024-05-15 08:33:01.937051] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:15.495 08:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:15.495 08:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:15.495 08:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:15.755 Running I/O for 10 seconds... 00:21:25.733 00:21:25.733 Latency(us) 00:21:25.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.733 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:25.733 Verification LBA range: start 0x0 length 0x2000 00:21:25.733 TLSTESTn1 : 10.01 5130.52 20.04 0.00 0.00 24914.19 5299.87 39435.58 00:21:25.733 =================================================================================================================== 00:21:25.733 Total : 5130.52 20.04 0.00 0.00 24914.19 5299.87 39435.58 00:21:25.733 0 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 327059 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 327059 ']' 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 327059 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 327059 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 327059' 00:21:25.733 killing process with pid 327059 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 327059 00:21:25.733 Received shutdown signal, test time was about 10.000000 seconds 00:21:25.733 00:21:25.733 Latency(us) 00:21:25.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.733 =================================================================================================================== 00:21:25.733 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.733 [2024-05-15 08:33:12.667226] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:21:25.733 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 327059 00:21:25.993 08:33:12 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 326958 00:21:25.993 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 326958 ']' 00:21:25.993 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 326958 00:21:25.993 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:25.993 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:25.993 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 326958 00:21:25.994 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:25.994 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:25.994 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 326958' 00:21:25.994 killing process with pid 326958 00:21:25.994 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 326958 00:21:25.994 [2024-05-15 08:33:12.912761] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:25.994 [2024-05-15 08:33:12.912796] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:25.994 08:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 326958 00:21:26.253 08:33:13 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:26.253 08:33:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:26.253 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:26.254 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.254 08:33:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:26.254 08:33:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=329322 00:21:26.254 08:33:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 329322 00:21:26.254 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 329322 ']' 00:21:26.254 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.254 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:26.254 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.254 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:26.254 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.254 [2024-05-15 08:33:13.163592] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
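[Note] Both measured runs in this trace follow the same driver pattern: bdevperf is started with -z, and the companion script connects to its RPC socket to trigger the workload (seen at target/tls.sh@211 above with a 20 s timeout, and again at @232 below). Condensed:

    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests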
00:21:26.254 [2024-05-15 08:33:13.163638] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.254 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.254 [2024-05-15 08:33:13.218252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.513 [2024-05-15 08:33:13.298081] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.513 [2024-05-15 08:33:13.298114] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.513 [2024-05-15 08:33:13.298121] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.513 [2024-05-15 08:33:13.298127] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.513 [2024-05-15 08:33:13.298133] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.513 [2024-05-15 08:33:13.298155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.082 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:27.082 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:27.082 08:33:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:27.082 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.082 08:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.082 08:33:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.082 08:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.KTYPvbw9bK 00:21:27.082 08:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.KTYPvbw9bK 00:21:27.082 08:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:27.342 [2024-05-15 08:33:14.162206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.342 08:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:27.342 08:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:27.601 [2024-05-15 08:33:14.503042] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:27.601 [2024-05-15 08:33:14.503088] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:27.601 [2024-05-15 08:33:14.503248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.601 08:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:27.861 malloc0 00:21:27.861 08:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
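For reference, the target-side TLS setup that target/tls.sh@219 drives above boils down to a handful of RPCs against the default /var/tmp/spdk.sock socket. A condensed sketch using the same arguments seen in the log (the PSK file /tmp/tmp.KTYPvbw9bK is generated fresh per run):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o                  # TCP transport; -o disables the C2H success optimization ("c2h_success": false in the config dump further down)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10   # -s serial number, -m max namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests a secure (TLS) channel, still experimental here
  $RPC bdev_malloc_create 32 4096 -b malloc0            # 32 MB RAM-backed bdev with 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KTYPvbw9bK   # registers the host PSK; this is the step logged just below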
00:21:27.861 08:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KTYPvbw9bK 00:21:28.121 [2024-05-15 08:33:15.008426] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:28.121 08:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=329777 00:21:28.121 08:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:28.121 08:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:28.121 08:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 329777 /var/tmp/bdevperf.sock 00:21:28.121 08:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 329777 ']' 00:21:28.121 08:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.121 08:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:28.121 08:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.121 08:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:28.121 08:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.121 [2024-05-15 08:33:15.072197] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:21:28.121 [2024-05-15 08:33:15.072241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329777 ] 00:21:28.121 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.121 [2024-05-15 08:33:15.125939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.381 [2024-05-15 08:33:15.200342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.948 08:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:28.948 08:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:28.948 08:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KTYPvbw9bK 00:21:29.208 08:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:29.208 [2024-05-15 08:33:16.160028] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.467 nvme0n1 00:21:29.467 08:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:29.467 Running I/O for 1 seconds... 
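The initiator half is symmetric, driven over bdevperf's own RPC socket. The three steps just logged, as they would look typed out by hand:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  # register the PSK file in the keyring under the name key0
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KTYPvbw9bK
  # attach over NVMe/TCP, referencing the key by name rather than the deprecated
  # spdk_nvme_ctrlr_opts.psk path route that the earlier run warned about
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # run the workload defined on the bdevperf command line (-q 128 -o 4k -w verify -t 1)
  $PERF -s /var/tmp/bdevperf.sock perform_tests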
00:21:30.406 00:21:30.406 Latency(us) 00:21:30.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.406 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:30.406 Verification LBA range: start 0x0 length 0x2000 00:21:30.406 nvme0n1 : 1.03 5005.15 19.55 0.00 0.00 25234.70 4986.43 25416.57 00:21:30.406 =================================================================================================================== 00:21:30.406 Total : 5005.15 19.55 0.00 0.00 25234.70 4986.43 25416.57 00:21:30.406 0 00:21:30.406 08:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 329777 00:21:30.406 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 329777 ']' 00:21:30.406 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 329777 00:21:30.406 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:30.407 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:30.407 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 329777 00:21:30.407 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:30.407 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:30.407 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 329777' 00:21:30.407 killing process with pid 329777 00:21:30.407 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 329777 00:21:30.407 Received shutdown signal, test time was about 1.000000 seconds 00:21:30.407 00:21:30.407 Latency(us) 00:21:30.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.407 =================================================================================================================== 00:21:30.407 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.407 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 329777 00:21:30.666 08:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 329322 00:21:30.666 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 329322 ']' 00:21:30.666 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 329322 00:21:30.667 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:30.667 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:30.667 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 329322 00:21:30.667 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:30.667 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:30.667 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 329322' 00:21:30.667 killing process with pid 329322 00:21:30.667 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 329322 00:21:30.667 [2024-05-15 08:33:17.666082] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:30.667 [2024-05-15 08:33:17.666125] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:30.667 08:33:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 329322 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=330255 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 330255 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 330255 ']' 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:30.927 08:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.927 [2024-05-15 08:33:17.931823] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:21:30.927 [2024-05-15 08:33:17.931868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.187 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.187 [2024-05-15 08:33:17.987622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.187 [2024-05-15 08:33:18.064644] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.187 [2024-05-15 08:33:18.064680] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.187 [2024-05-15 08:33:18.064687] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.187 [2024-05-15 08:33:18.064693] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.187 [2024-05-15 08:33:18.064698] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
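Each nvmf_tgt instance here is launched inside the cvl_0_0_ns_spdk network namespace, which holds the 10.0.0.2 test interface, and the harness then blocks until the RPC socket answers. waitforlisten in autotest_common.sh does the waiting properly; a minimal stand-in sketch:

  # instance id 0, all tracepoint groups enabled (-e 0xFFFF)
  sudo ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # poll until the app serves RPCs on the default UNIX-domain socket (assumes rpc.py's -t timeout option)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 120 rpc_get_methods > /dev/null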
00:21:31.187 [2024-05-15 08:33:18.064716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.756 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:31.756 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:31.756 08:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:31.756 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.756 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.756 08:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.756 08:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:31.756 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.756 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.756 [2024-05-15 08:33:18.774493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.015 malloc0 00:21:32.015 [2024-05-15 08:33:18.802801] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:32.015 [2024-05-15 08:33:18.802872] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:32.015 [2024-05-15 08:33:18.803034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.015 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.015 08:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=330363 00:21:32.015 08:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 330363 /var/tmp/bdevperf.sock 00:21:32.015 08:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:32.016 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 330363 ']' 00:21:32.016 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.016 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:32.016 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:32.016 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:32.016 08:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.016 [2024-05-15 08:33:18.861407] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:21:32.016 [2024-05-15 08:33:18.861448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330363 ] 00:21:32.016 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.016 [2024-05-15 08:33:18.911712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.016 [2024-05-15 08:33:18.990217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.954 08:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:32.954 08:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:32.954 08:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KTYPvbw9bK 00:21:32.954 08:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:32.954 [2024-05-15 08:33:19.950408] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.214 nvme0n1 00:21:33.214 08:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:33.214 Running I/O for 1 seconds... 00:21:34.152 00:21:34.152 Latency(us) 00:21:34.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.152 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:34.152 Verification LBA range: start 0x0 length 0x2000 00:21:34.152 nvme0n1 : 1.02 4646.33 18.15 0.00 0.00 27299.37 5157.40 55848.07 00:21:34.152 =================================================================================================================== 00:21:34.152 Total : 4646.33 18.15 0.00 0.00 27299.37 5157.40 55848.07 00:21:34.152 0 00:21:34.152 08:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:34.152 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.152 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.412 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.412 08:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:34.412 "subsystems": [ 00:21:34.412 { 00:21:34.412 "subsystem": "keyring", 00:21:34.412 "config": [ 00:21:34.412 { 00:21:34.412 "method": "keyring_file_add_key", 00:21:34.412 "params": { 00:21:34.412 "name": "key0", 00:21:34.412 "path": "/tmp/tmp.KTYPvbw9bK" 00:21:34.412 } 00:21:34.412 } 00:21:34.412 ] 00:21:34.412 }, 00:21:34.412 { 00:21:34.412 "subsystem": "iobuf", 00:21:34.412 "config": [ 00:21:34.412 { 00:21:34.412 "method": "iobuf_set_options", 00:21:34.412 "params": { 00:21:34.412 "small_pool_count": 8192, 00:21:34.412 "large_pool_count": 1024, 00:21:34.412 "small_bufsize": 8192, 00:21:34.412 "large_bufsize": 135168 00:21:34.412 } 00:21:34.412 } 00:21:34.413 ] 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "subsystem": "sock", 00:21:34.413 "config": [ 00:21:34.413 { 00:21:34.413 "method": "sock_impl_set_options", 00:21:34.413 "params": { 00:21:34.413 "impl_name": "posix", 00:21:34.413 "recv_buf_size": 2097152, 
00:21:34.413 "send_buf_size": 2097152, 00:21:34.413 "enable_recv_pipe": true, 00:21:34.413 "enable_quickack": false, 00:21:34.413 "enable_placement_id": 0, 00:21:34.413 "enable_zerocopy_send_server": true, 00:21:34.413 "enable_zerocopy_send_client": false, 00:21:34.413 "zerocopy_threshold": 0, 00:21:34.413 "tls_version": 0, 00:21:34.413 "enable_ktls": false 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "sock_impl_set_options", 00:21:34.413 "params": { 00:21:34.413 "impl_name": "ssl", 00:21:34.413 "recv_buf_size": 4096, 00:21:34.413 "send_buf_size": 4096, 00:21:34.413 "enable_recv_pipe": true, 00:21:34.413 "enable_quickack": false, 00:21:34.413 "enable_placement_id": 0, 00:21:34.413 "enable_zerocopy_send_server": true, 00:21:34.413 "enable_zerocopy_send_client": false, 00:21:34.413 "zerocopy_threshold": 0, 00:21:34.413 "tls_version": 0, 00:21:34.413 "enable_ktls": false 00:21:34.413 } 00:21:34.413 } 00:21:34.413 ] 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "subsystem": "vmd", 00:21:34.413 "config": [] 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "subsystem": "accel", 00:21:34.413 "config": [ 00:21:34.413 { 00:21:34.413 "method": "accel_set_options", 00:21:34.413 "params": { 00:21:34.413 "small_cache_size": 128, 00:21:34.413 "large_cache_size": 16, 00:21:34.413 "task_count": 2048, 00:21:34.413 "sequence_count": 2048, 00:21:34.413 "buf_count": 2048 00:21:34.413 } 00:21:34.413 } 00:21:34.413 ] 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "subsystem": "bdev", 00:21:34.413 "config": [ 00:21:34.413 { 00:21:34.413 "method": "bdev_set_options", 00:21:34.413 "params": { 00:21:34.413 "bdev_io_pool_size": 65535, 00:21:34.413 "bdev_io_cache_size": 256, 00:21:34.413 "bdev_auto_examine": true, 00:21:34.413 "iobuf_small_cache_size": 128, 00:21:34.413 "iobuf_large_cache_size": 16 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "bdev_raid_set_options", 00:21:34.413 "params": { 00:21:34.413 "process_window_size_kb": 1024 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "bdev_iscsi_set_options", 00:21:34.413 "params": { 00:21:34.413 "timeout_sec": 30 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "bdev_nvme_set_options", 00:21:34.413 "params": { 00:21:34.413 "action_on_timeout": "none", 00:21:34.413 "timeout_us": 0, 00:21:34.413 "timeout_admin_us": 0, 00:21:34.413 "keep_alive_timeout_ms": 10000, 00:21:34.413 "arbitration_burst": 0, 00:21:34.413 "low_priority_weight": 0, 00:21:34.413 "medium_priority_weight": 0, 00:21:34.413 "high_priority_weight": 0, 00:21:34.413 "nvme_adminq_poll_period_us": 10000, 00:21:34.413 "nvme_ioq_poll_period_us": 0, 00:21:34.413 "io_queue_requests": 0, 00:21:34.413 "delay_cmd_submit": true, 00:21:34.413 "transport_retry_count": 4, 00:21:34.413 "bdev_retry_count": 3, 00:21:34.413 "transport_ack_timeout": 0, 00:21:34.413 "ctrlr_loss_timeout_sec": 0, 00:21:34.413 "reconnect_delay_sec": 0, 00:21:34.413 "fast_io_fail_timeout_sec": 0, 00:21:34.413 "disable_auto_failback": false, 00:21:34.413 "generate_uuids": false, 00:21:34.413 "transport_tos": 0, 00:21:34.413 "nvme_error_stat": false, 00:21:34.413 "rdma_srq_size": 0, 00:21:34.413 "io_path_stat": false, 00:21:34.413 "allow_accel_sequence": false, 00:21:34.413 "rdma_max_cq_size": 0, 00:21:34.413 "rdma_cm_event_timeout_ms": 0, 00:21:34.413 "dhchap_digests": [ 00:21:34.413 "sha256", 00:21:34.413 "sha384", 00:21:34.413 "sha512" 00:21:34.413 ], 00:21:34.413 "dhchap_dhgroups": [ 00:21:34.413 "null", 00:21:34.413 "ffdhe2048", 00:21:34.413 "ffdhe3072", 
00:21:34.413 "ffdhe4096", 00:21:34.413 "ffdhe6144", 00:21:34.413 "ffdhe8192" 00:21:34.413 ] 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "bdev_nvme_set_hotplug", 00:21:34.413 "params": { 00:21:34.413 "period_us": 100000, 00:21:34.413 "enable": false 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "bdev_malloc_create", 00:21:34.413 "params": { 00:21:34.413 "name": "malloc0", 00:21:34.413 "num_blocks": 8192, 00:21:34.413 "block_size": 4096, 00:21:34.413 "physical_block_size": 4096, 00:21:34.413 "uuid": "b0103c20-17fe-46a7-b70a-534b7497b696", 00:21:34.413 "optimal_io_boundary": 0 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "bdev_wait_for_examine" 00:21:34.413 } 00:21:34.413 ] 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "subsystem": "nbd", 00:21:34.413 "config": [] 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "subsystem": "scheduler", 00:21:34.413 "config": [ 00:21:34.413 { 00:21:34.413 "method": "framework_set_scheduler", 00:21:34.413 "params": { 00:21:34.413 "name": "static" 00:21:34.413 } 00:21:34.413 } 00:21:34.413 ] 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "subsystem": "nvmf", 00:21:34.413 "config": [ 00:21:34.413 { 00:21:34.413 "method": "nvmf_set_config", 00:21:34.413 "params": { 00:21:34.413 "discovery_filter": "match_any", 00:21:34.413 "admin_cmd_passthru": { 00:21:34.413 "identify_ctrlr": false 00:21:34.413 } 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "nvmf_set_max_subsystems", 00:21:34.413 "params": { 00:21:34.413 "max_subsystems": 1024 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "nvmf_set_crdt", 00:21:34.413 "params": { 00:21:34.413 "crdt1": 0, 00:21:34.413 "crdt2": 0, 00:21:34.413 "crdt3": 0 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "nvmf_create_transport", 00:21:34.413 "params": { 00:21:34.413 "trtype": "TCP", 00:21:34.413 "max_queue_depth": 128, 00:21:34.413 "max_io_qpairs_per_ctrlr": 127, 00:21:34.413 "in_capsule_data_size": 4096, 00:21:34.413 "max_io_size": 131072, 00:21:34.413 "io_unit_size": 131072, 00:21:34.413 "max_aq_depth": 128, 00:21:34.413 "num_shared_buffers": 511, 00:21:34.413 "buf_cache_size": 4294967295, 00:21:34.413 "dif_insert_or_strip": false, 00:21:34.413 "zcopy": false, 00:21:34.413 "c2h_success": false, 00:21:34.413 "sock_priority": 0, 00:21:34.413 "abort_timeout_sec": 1, 00:21:34.413 "ack_timeout": 0, 00:21:34.413 "data_wr_pool_size": 0 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "nvmf_create_subsystem", 00:21:34.413 "params": { 00:21:34.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.413 "allow_any_host": false, 00:21:34.413 "serial_number": "00000000000000000000", 00:21:34.413 "model_number": "SPDK bdev Controller", 00:21:34.413 "max_namespaces": 32, 00:21:34.413 "min_cntlid": 1, 00:21:34.413 "max_cntlid": 65519, 00:21:34.413 "ana_reporting": false 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "nvmf_subsystem_add_host", 00:21:34.413 "params": { 00:21:34.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.413 "host": "nqn.2016-06.io.spdk:host1", 00:21:34.413 "psk": "key0" 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "nvmf_subsystem_add_ns", 00:21:34.413 "params": { 00:21:34.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.413 "namespace": { 00:21:34.413 "nsid": 1, 00:21:34.413 "bdev_name": "malloc0", 00:21:34.413 "nguid": "B0103C2017FE46A7B70A534B7497B696", 00:21:34.413 "uuid": "b0103c20-17fe-46a7-b70a-534b7497b696", 00:21:34.413 
"no_auto_visible": false 00:21:34.413 } 00:21:34.413 } 00:21:34.413 }, 00:21:34.413 { 00:21:34.413 "method": "nvmf_subsystem_add_listener", 00:21:34.413 "params": { 00:21:34.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.413 "listen_address": { 00:21:34.413 "trtype": "TCP", 00:21:34.413 "adrfam": "IPv4", 00:21:34.413 "traddr": "10.0.0.2", 00:21:34.413 "trsvcid": "4420" 00:21:34.413 }, 00:21:34.413 "secure_channel": true 00:21:34.413 } 00:21:34.413 } 00:21:34.413 ] 00:21:34.413 } 00:21:34.413 ] 00:21:34.413 }' 00:21:34.413 08:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:34.674 "subsystems": [ 00:21:34.674 { 00:21:34.674 "subsystem": "keyring", 00:21:34.674 "config": [ 00:21:34.674 { 00:21:34.674 "method": "keyring_file_add_key", 00:21:34.674 "params": { 00:21:34.674 "name": "key0", 00:21:34.674 "path": "/tmp/tmp.KTYPvbw9bK" 00:21:34.674 } 00:21:34.674 } 00:21:34.674 ] 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "subsystem": "iobuf", 00:21:34.674 "config": [ 00:21:34.674 { 00:21:34.674 "method": "iobuf_set_options", 00:21:34.674 "params": { 00:21:34.674 "small_pool_count": 8192, 00:21:34.674 "large_pool_count": 1024, 00:21:34.674 "small_bufsize": 8192, 00:21:34.674 "large_bufsize": 135168 00:21:34.674 } 00:21:34.674 } 00:21:34.674 ] 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "subsystem": "sock", 00:21:34.674 "config": [ 00:21:34.674 { 00:21:34.674 "method": "sock_impl_set_options", 00:21:34.674 "params": { 00:21:34.674 "impl_name": "posix", 00:21:34.674 "recv_buf_size": 2097152, 00:21:34.674 "send_buf_size": 2097152, 00:21:34.674 "enable_recv_pipe": true, 00:21:34.674 "enable_quickack": false, 00:21:34.674 "enable_placement_id": 0, 00:21:34.674 "enable_zerocopy_send_server": true, 00:21:34.674 "enable_zerocopy_send_client": false, 00:21:34.674 "zerocopy_threshold": 0, 00:21:34.674 "tls_version": 0, 00:21:34.674 "enable_ktls": false 00:21:34.674 } 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "method": "sock_impl_set_options", 00:21:34.674 "params": { 00:21:34.674 "impl_name": "ssl", 00:21:34.674 "recv_buf_size": 4096, 00:21:34.674 "send_buf_size": 4096, 00:21:34.674 "enable_recv_pipe": true, 00:21:34.674 "enable_quickack": false, 00:21:34.674 "enable_placement_id": 0, 00:21:34.674 "enable_zerocopy_send_server": true, 00:21:34.674 "enable_zerocopy_send_client": false, 00:21:34.674 "zerocopy_threshold": 0, 00:21:34.674 "tls_version": 0, 00:21:34.674 "enable_ktls": false 00:21:34.674 } 00:21:34.674 } 00:21:34.674 ] 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "subsystem": "vmd", 00:21:34.674 "config": [] 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "subsystem": "accel", 00:21:34.674 "config": [ 00:21:34.674 { 00:21:34.674 "method": "accel_set_options", 00:21:34.674 "params": { 00:21:34.674 "small_cache_size": 128, 00:21:34.674 "large_cache_size": 16, 00:21:34.674 "task_count": 2048, 00:21:34.674 "sequence_count": 2048, 00:21:34.674 "buf_count": 2048 00:21:34.674 } 00:21:34.674 } 00:21:34.674 ] 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "subsystem": "bdev", 00:21:34.674 "config": [ 00:21:34.674 { 00:21:34.674 "method": "bdev_set_options", 00:21:34.674 "params": { 00:21:34.674 "bdev_io_pool_size": 65535, 00:21:34.674 "bdev_io_cache_size": 256, 00:21:34.674 "bdev_auto_examine": true, 00:21:34.674 "iobuf_small_cache_size": 128, 00:21:34.674 "iobuf_large_cache_size": 16 00:21:34.674 } 00:21:34.674 }, 
00:21:34.674 { 00:21:34.674 "method": "bdev_raid_set_options", 00:21:34.674 "params": { 00:21:34.674 "process_window_size_kb": 1024 00:21:34.674 } 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "method": "bdev_iscsi_set_options", 00:21:34.674 "params": { 00:21:34.674 "timeout_sec": 30 00:21:34.674 } 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "method": "bdev_nvme_set_options", 00:21:34.674 "params": { 00:21:34.674 "action_on_timeout": "none", 00:21:34.674 "timeout_us": 0, 00:21:34.674 "timeout_admin_us": 0, 00:21:34.674 "keep_alive_timeout_ms": 10000, 00:21:34.674 "arbitration_burst": 0, 00:21:34.674 "low_priority_weight": 0, 00:21:34.674 "medium_priority_weight": 0, 00:21:34.674 "high_priority_weight": 0, 00:21:34.674 "nvme_adminq_poll_period_us": 10000, 00:21:34.674 "nvme_ioq_poll_period_us": 0, 00:21:34.674 "io_queue_requests": 512, 00:21:34.674 "delay_cmd_submit": true, 00:21:34.674 "transport_retry_count": 4, 00:21:34.674 "bdev_retry_count": 3, 00:21:34.674 "transport_ack_timeout": 0, 00:21:34.674 "ctrlr_loss_timeout_sec": 0, 00:21:34.674 "reconnect_delay_sec": 0, 00:21:34.674 "fast_io_fail_timeout_sec": 0, 00:21:34.674 "disable_auto_failback": false, 00:21:34.674 "generate_uuids": false, 00:21:34.674 "transport_tos": 0, 00:21:34.674 "nvme_error_stat": false, 00:21:34.674 "rdma_srq_size": 0, 00:21:34.674 "io_path_stat": false, 00:21:34.674 "allow_accel_sequence": false, 00:21:34.674 "rdma_max_cq_size": 0, 00:21:34.674 "rdma_cm_event_timeout_ms": 0, 00:21:34.674 "dhchap_digests": [ 00:21:34.674 "sha256", 00:21:34.674 "sha384", 00:21:34.674 "sha512" 00:21:34.674 ], 00:21:34.674 "dhchap_dhgroups": [ 00:21:34.674 "null", 00:21:34.674 "ffdhe2048", 00:21:34.674 "ffdhe3072", 00:21:34.674 "ffdhe4096", 00:21:34.674 "ffdhe6144", 00:21:34.674 "ffdhe8192" 00:21:34.674 ] 00:21:34.674 } 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "method": "bdev_nvme_attach_controller", 00:21:34.674 "params": { 00:21:34.674 "name": "nvme0", 00:21:34.674 "trtype": "TCP", 00:21:34.674 "adrfam": "IPv4", 00:21:34.674 "traddr": "10.0.0.2", 00:21:34.674 "trsvcid": "4420", 00:21:34.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.674 "prchk_reftag": false, 00:21:34.674 "prchk_guard": false, 00:21:34.674 "ctrlr_loss_timeout_sec": 0, 00:21:34.674 "reconnect_delay_sec": 0, 00:21:34.674 "fast_io_fail_timeout_sec": 0, 00:21:34.674 "psk": "key0", 00:21:34.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.674 "hdgst": false, 00:21:34.674 "ddgst": false 00:21:34.674 } 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "method": "bdev_nvme_set_hotplug", 00:21:34.674 "params": { 00:21:34.674 "period_us": 100000, 00:21:34.674 "enable": false 00:21:34.674 } 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "method": "bdev_enable_histogram", 00:21:34.674 "params": { 00:21:34.674 "name": "nvme0n1", 00:21:34.674 "enable": true 00:21:34.674 } 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "method": "bdev_wait_for_examine" 00:21:34.674 } 00:21:34.674 ] 00:21:34.674 }, 00:21:34.674 { 00:21:34.674 "subsystem": "nbd", 00:21:34.674 "config": [] 00:21:34.674 } 00:21:34.674 ] 00:21:34.674 }' 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 330363 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 330363 ']' 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 330363 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:34.674 
08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 330363 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 330363' 00:21:34.674 killing process with pid 330363 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 330363 00:21:34.674 Received shutdown signal, test time was about 1.000000 seconds 00:21:34.674 00:21:34.674 Latency(us) 00:21:34.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.674 =================================================================================================================== 00:21:34.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.674 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 330363 00:21:34.934 08:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 330255 00:21:34.934 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 330255 ']' 00:21:34.934 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 330255 00:21:34.934 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:34.934 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:34.934 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 330255 00:21:34.934 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:34.934 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:34.934 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 330255' 00:21:34.935 killing process with pid 330255 00:21:34.935 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 330255 00:21:34.935 [2024-05-15 08:33:21.762128] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:34.935 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 330255 00:21:35.195 08:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:35.195 08:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.195 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:35.195 08:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:35.195 "subsystems": [ 00:21:35.195 { 00:21:35.195 "subsystem": "keyring", 00:21:35.195 "config": [ 00:21:35.195 { 00:21:35.195 "method": "keyring_file_add_key", 00:21:35.195 "params": { 00:21:35.195 "name": "key0", 00:21:35.195 "path": "/tmp/tmp.KTYPvbw9bK" 00:21:35.195 } 00:21:35.195 } 00:21:35.195 ] 00:21:35.195 }, 00:21:35.195 { 00:21:35.195 "subsystem": "iobuf", 00:21:35.195 "config": [ 00:21:35.195 { 00:21:35.195 "method": "iobuf_set_options", 00:21:35.195 "params": { 00:21:35.195 "small_pool_count": 8192, 00:21:35.195 "large_pool_count": 1024, 00:21:35.195 "small_bufsize": 8192, 00:21:35.195 "large_bufsize": 135168 00:21:35.195 } 00:21:35.195 } 00:21:35.195 ] 00:21:35.195 }, 00:21:35.195 { 00:21:35.195 "subsystem": "sock", 00:21:35.195 "config": [ 00:21:35.195 { 00:21:35.195 "method": 
"sock_impl_set_options", 00:21:35.195 "params": { 00:21:35.195 "impl_name": "posix", 00:21:35.195 "recv_buf_size": 2097152, 00:21:35.195 "send_buf_size": 2097152, 00:21:35.195 "enable_recv_pipe": true, 00:21:35.195 "enable_quickack": false, 00:21:35.195 "enable_placement_id": 0, 00:21:35.195 "enable_zerocopy_send_server": true, 00:21:35.195 "enable_zerocopy_send_client": false, 00:21:35.195 "zerocopy_threshold": 0, 00:21:35.195 "tls_version": 0, 00:21:35.195 "enable_ktls": false 00:21:35.195 } 00:21:35.195 }, 00:21:35.195 { 00:21:35.195 "method": "sock_impl_set_options", 00:21:35.195 "params": { 00:21:35.195 "impl_name": "ssl", 00:21:35.195 "recv_buf_size": 4096, 00:21:35.195 "send_buf_size": 4096, 00:21:35.195 "enable_recv_pipe": true, 00:21:35.195 "enable_quickack": false, 00:21:35.195 "enable_placement_id": 0, 00:21:35.195 "enable_zerocopy_send_server": true, 00:21:35.195 "enable_zerocopy_send_client": false, 00:21:35.195 "zerocopy_threshold": 0, 00:21:35.195 "tls_version": 0, 00:21:35.195 "enable_ktls": false 00:21:35.195 } 00:21:35.195 } 00:21:35.195 ] 00:21:35.195 }, 00:21:35.195 { 00:21:35.195 "subsystem": "vmd", 00:21:35.195 "config": [] 00:21:35.195 }, 00:21:35.195 { 00:21:35.195 "subsystem": "accel", 00:21:35.195 "config": [ 00:21:35.195 { 00:21:35.195 "method": "accel_set_options", 00:21:35.195 "params": { 00:21:35.195 "small_cache_size": 128, 00:21:35.195 "large_cache_size": 16, 00:21:35.195 "task_count": 2048, 00:21:35.195 "sequence_count": 2048, 00:21:35.195 "buf_count": 2048 00:21:35.195 } 00:21:35.195 } 00:21:35.195 ] 00:21:35.195 }, 00:21:35.195 { 00:21:35.195 "subsystem": "bdev", 00:21:35.195 "config": [ 00:21:35.195 { 00:21:35.195 "method": "bdev_set_options", 00:21:35.195 "params": { 00:21:35.195 "bdev_io_pool_size": 65535, 00:21:35.195 "bdev_io_cache_size": 256, 00:21:35.195 "bdev_auto_examine": true, 00:21:35.195 "iobuf_small_cache_size": 128, 00:21:35.195 "iobuf_large_cache_size": 16 00:21:35.195 } 00:21:35.195 }, 00:21:35.195 { 00:21:35.195 "method": "bdev_raid_set_options", 00:21:35.195 "params": { 00:21:35.195 "process_window_size_kb": 1024 00:21:35.195 } 00:21:35.195 }, 00:21:35.195 { 00:21:35.195 "method": "bdev_iscsi_set_options", 00:21:35.195 "params": { 00:21:35.195 "timeout_sec": 30 00:21:35.195 } 00:21:35.195 }, 00:21:35.195 { 00:21:35.195 "method": "bdev_nvme_set_options", 00:21:35.195 "params": { 00:21:35.195 "action_on_timeout": "none", 00:21:35.195 "timeout_us": 0, 00:21:35.195 "timeout_admin_us": 0, 00:21:35.195 "keep_alive_timeout_ms": 10000, 00:21:35.195 "arbitration_burst": 0, 00:21:35.195 "low_priority_weight": 0, 00:21:35.195 "medium_priority_weight": 0, 00:21:35.195 "high_priority_weight": 0, 00:21:35.195 "nvme_adminq_poll_period_us": 10000, 00:21:35.195 "nvme_ioq_poll_period_us": 0, 00:21:35.195 "io_queue_requests": 0, 00:21:35.195 "delay_cmd_submit": true, 00:21:35.195 "transport_retry_count": 4, 00:21:35.195 "bdev_retry_count": 3, 00:21:35.195 "transport_ack_timeout": 0, 00:21:35.195 "ctrlr_loss_timeout_sec": 0, 00:21:35.196 "reconnect_delay_sec": 0, 00:21:35.196 "fast_io_fail_timeout_sec": 0, 00:21:35.196 "disable_auto_failback": false, 00:21:35.196 "generate_uuids": false, 00:21:35.196 "transport_tos": 0, 00:21:35.196 "nvme_error_stat": false, 00:21:35.196 "rdma_srq_size": 0, 00:21:35.196 "io_path_stat": false, 00:21:35.196 "allow_accel_sequence": false, 00:21:35.196 "rdma_max_cq_size": 0, 00:21:35.196 "rdma_cm_event_timeout_ms": 0, 00:21:35.196 "dhchap_digests": [ 00:21:35.196 "sha256", 00:21:35.196 "sha384", 00:21:35.196 "sha512" 
00:21:35.196 ], 00:21:35.196 "dhchap_dhgroups": [ 00:21:35.196 "null", 00:21:35.196 "ffdhe2048", 00:21:35.196 "ffdhe3072", 00:21:35.196 "ffdhe4096", 00:21:35.196 "ffdhe6144", 00:21:35.196 "ffdhe8192" 00:21:35.196 ] 00:21:35.196 } 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "method": "bdev_nvme_set_hotplug", 00:21:35.196 "params": { 00:21:35.196 "period_us": 100000, 00:21:35.196 "enable": false 00:21:35.196 } 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "method": "bdev_malloc_create", 00:21:35.196 "params": { 00:21:35.196 "name": "malloc0", 00:21:35.196 "num_blocks": 8192, 00:21:35.196 "block_size": 4096, 00:21:35.196 "physical_block_size": 4096, 00:21:35.196 "uuid": "b0103c20-17fe-46a7-b70a-534b7497b696", 00:21:35.196 "optimal_io_boundary": 0 00:21:35.196 } 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "method": "bdev_wait_for_examine" 00:21:35.196 } 00:21:35.196 ] 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "subsystem": "nbd", 00:21:35.196 "config": [] 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "subsystem": "scheduler", 00:21:35.196 "config": [ 00:21:35.196 { 00:21:35.196 "method": "framework_set_scheduler", 00:21:35.196 "params": { 00:21:35.196 "name": "static" 00:21:35.196 } 00:21:35.196 } 00:21:35.196 ] 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "subsystem": "nvmf", 00:21:35.196 "config": [ 00:21:35.196 { 00:21:35.196 "method": "nvmf_set_config", 00:21:35.196 "params": { 00:21:35.196 "discovery_filter": "match_any", 00:21:35.196 "admin_cmd_passthru": { 00:21:35.196 "identify_ctrlr": false 00:21:35.196 } 00:21:35.196 } 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "method": "nvmf_set_max_subsystems", 00:21:35.196 "params": { 00:21:35.196 "max_subsystems": 1024 00:21:35.196 } 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "method": "nvmf_set_crdt", 00:21:35.196 "params": { 00:21:35.196 "crdt1": 0, 00:21:35.196 "crdt2": 0, 00:21:35.196 "crdt3": 0 00:21:35.196 } 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "method": "nvmf_create_transport", 00:21:35.196 "params": { 00:21:35.196 "trtype": "TCP", 00:21:35.196 "max_queue_depth": 128, 00:21:35.196 "max_io_qpairs_per_ctrlr": 127, 00:21:35.196 "in_capsule_data_size": 4096, 00:21:35.196 "max_io_size": 131072, 00:21:35.196 "io_unit_size": 131072, 00:21:35.196 "max_aq_depth": 128, 00:21:35.196 "num_shared_buffers": 511, 00:21:35.196 "buf_cache_size": 4294967295, 00:21:35.196 "dif_insert_or_strip": false, 00:21:35.196 "zcopy": false, 00:21:35.196 "c2h_success": false, 00:21:35.196 "sock_priority": 0, 00:21:35.196 "abort_timeout_sec": 1, 00:21:35.196 "ack_timeout": 0, 00:21:35.196 "data_wr_pool_size": 0 00:21:35.196 } 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "method": "nvmf_create_subsystem", 00:21:35.196 "params": { 00:21:35.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.196 "allow_any_host": false, 00:21:35.196 "serial_number": "00000000000000000000", 00:21:35.196 "model_number": "SPDK bdev Controller", 00:21:35.196 "max_namespaces": 32, 00:21:35.196 "min_cntlid": 1, 00:21:35.196 "max_cntlid": 65519, 00:21:35.196 "ana_reporting": false 00:21:35.196 } 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "method": "nvmf_subsystem_add_host", 00:21:35.196 "params": { 00:21:35.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.196 "host": "nqn.2016-06.io.spdk:host1", 00:21:35.196 "psk": "key0" 00:21:35.196 } 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "method": "nvmf_subsystem_add_ns", 00:21:35.196 "params": { 00:21:35.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.196 "namespace": { 00:21:35.196 "nsid": 1, 00:21:35.196 "bdev_name": "malloc0", 00:21:35.196 
"nguid": "B0103C2017FE46A7B70A534B7497B696", 00:21:35.196 "uuid": "b0103c20-17fe-46a7-b70a-534b7497b696", 00:21:35.196 "no_auto_visible": false 00:21:35.196 } 00:21:35.196 } 00:21:35.196 }, 00:21:35.196 { 00:21:35.196 "method": "nvmf_subsystem_add_listener", 00:21:35.196 "params": { 00:21:35.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.196 "listen_address": { 00:21:35.196 "trtype": "TCP", 00:21:35.196 "adrfam": "IPv4", 00:21:35.196 "traddr": "10.0.0.2", 00:21:35.196 "trsvcid": "4420" 00:21:35.196 }, 00:21:35.196 "secure_channel": true 00:21:35.196 } 00:21:35.196 } 00:21:35.196 ] 00:21:35.196 } 00:21:35.196 ] 00:21:35.196 }' 00:21:35.196 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.196 08:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=330983 00:21:35.196 08:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 330983 00:21:35.196 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 330983 ']' 00:21:35.196 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.196 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:35.196 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.196 08:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:35.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.196 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:35.196 08:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.196 [2024-05-15 08:33:22.034434] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:21:35.196 [2024-05-15 08:33:22.034480] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.196 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.196 [2024-05-15 08:33:22.090002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.196 [2024-05-15 08:33:22.167014] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.196 [2024-05-15 08:33:22.167048] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.196 [2024-05-15 08:33:22.167055] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.196 [2024-05-15 08:33:22.167060] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.196 [2024-05-15 08:33:22.167065] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.196 [2024-05-15 08:33:22.167130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.458 [2024-05-15 08:33:22.369857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.458 [2024-05-15 08:33:22.401870] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:35.458 [2024-05-15 08:33:22.401912] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.458 [2024-05-15 08:33:22.409474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=331014 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 331014 /var/tmp/bdevperf.sock 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 331014 ']' 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
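bdevperf gets the same treatment: its saved config, which now bakes in the keyring entry, the bdev_nvme_attach_controller call and bdev_enable_histogram, is fed in via -c /dev/fd/63, leaving only perform_tests to issue over RPC. A sketch with a hypothetical bperf.json holding that dump:

  # -z waits for RPC-driven start; -c pre-loads keyring, controller attach and histogram setup
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bperf.json &
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests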
00:21:36.028 08:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:36.028 "subsystems": [ 00:21:36.028 { 00:21:36.028 "subsystem": "keyring", 00:21:36.028 "config": [ 00:21:36.028 { 00:21:36.028 "method": "keyring_file_add_key", 00:21:36.028 "params": { 00:21:36.028 "name": "key0", 00:21:36.028 "path": "/tmp/tmp.KTYPvbw9bK" 00:21:36.028 } 00:21:36.028 } 00:21:36.028 ] 00:21:36.028 }, 00:21:36.028 { 00:21:36.028 "subsystem": "iobuf", 00:21:36.028 "config": [ 00:21:36.028 { 00:21:36.028 "method": "iobuf_set_options", 00:21:36.028 "params": { 00:21:36.028 "small_pool_count": 8192, 00:21:36.028 "large_pool_count": 1024, 00:21:36.028 "small_bufsize": 8192, 00:21:36.028 "large_bufsize": 135168 00:21:36.028 } 00:21:36.028 } 00:21:36.028 ] 00:21:36.028 }, 00:21:36.028 { 00:21:36.028 "subsystem": "sock", 00:21:36.028 "config": [ 00:21:36.028 { 00:21:36.028 "method": "sock_impl_set_options", 00:21:36.028 "params": { 00:21:36.028 "impl_name": "posix", 00:21:36.028 "recv_buf_size": 2097152, 00:21:36.028 "send_buf_size": 2097152, 00:21:36.028 "enable_recv_pipe": true, 00:21:36.028 "enable_quickack": false, 00:21:36.028 "enable_placement_id": 0, 00:21:36.028 "enable_zerocopy_send_server": true, 00:21:36.028 "enable_zerocopy_send_client": false, 00:21:36.028 "zerocopy_threshold": 0, 00:21:36.028 "tls_version": 0, 00:21:36.028 "enable_ktls": false 00:21:36.028 } 00:21:36.028 }, 00:21:36.028 { 00:21:36.028 "method": "sock_impl_set_options", 00:21:36.028 "params": { 00:21:36.028 "impl_name": "ssl", 00:21:36.028 "recv_buf_size": 4096, 00:21:36.028 "send_buf_size": 4096, 00:21:36.028 "enable_recv_pipe": true, 00:21:36.028 "enable_quickack": false, 00:21:36.028 "enable_placement_id": 0, 00:21:36.028 "enable_zerocopy_send_server": true, 00:21:36.028 "enable_zerocopy_send_client": false, 00:21:36.028 "zerocopy_threshold": 0, 00:21:36.028 "tls_version": 0, 00:21:36.028 "enable_ktls": false 00:21:36.028 } 00:21:36.028 } 00:21:36.028 ] 00:21:36.028 }, 00:21:36.028 { 00:21:36.028 "subsystem": "vmd", 00:21:36.028 "config": [] 00:21:36.028 }, 00:21:36.028 { 00:21:36.028 "subsystem": "accel", 00:21:36.028 "config": [ 00:21:36.028 { 00:21:36.028 "method": "accel_set_options", 00:21:36.028 "params": { 00:21:36.028 "small_cache_size": 128, 00:21:36.028 "large_cache_size": 16, 00:21:36.028 "task_count": 2048, 00:21:36.028 "sequence_count": 2048, 00:21:36.028 "buf_count": 2048 00:21:36.028 } 00:21:36.028 } 00:21:36.028 ] 00:21:36.028 }, 00:21:36.028 { 00:21:36.028 "subsystem": "bdev", 00:21:36.028 "config": [ 00:21:36.028 { 00:21:36.028 "method": "bdev_set_options", 00:21:36.028 "params": { 00:21:36.028 "bdev_io_pool_size": 65535, 00:21:36.028 "bdev_io_cache_size": 256, 00:21:36.028 "bdev_auto_examine": true, 00:21:36.028 "iobuf_small_cache_size": 128, 00:21:36.028 "iobuf_large_cache_size": 16 00:21:36.028 } 00:21:36.028 }, 00:21:36.028 { 00:21:36.028 "method": "bdev_raid_set_options", 00:21:36.028 "params": { 00:21:36.028 "process_window_size_kb": 1024 00:21:36.028 } 00:21:36.028 }, 00:21:36.028 { 00:21:36.028 "method": "bdev_iscsi_set_options", 00:21:36.028 "params": { 00:21:36.028 "timeout_sec": 30 00:21:36.028 } 00:21:36.028 }, 00:21:36.028 { 00:21:36.028 "method": "bdev_nvme_set_options", 00:21:36.028 "params": { 00:21:36.028 "action_on_timeout": "none", 00:21:36.028 "timeout_us": 0, 00:21:36.028 "timeout_admin_us": 0, 00:21:36.028 "keep_alive_timeout_ms": 10000, 00:21:36.028 "arbitration_burst": 0, 00:21:36.028 "low_priority_weight": 0, 00:21:36.028 "medium_priority_weight": 0, 00:21:36.028 
"high_priority_weight": 0, 00:21:36.028 "nvme_adminq_poll_period_us": 10000, 00:21:36.028 "nvme_ioq_poll_period_us": 0, 00:21:36.028 "io_queue_requests": 512, 00:21:36.028 "delay_cmd_submit": true, 00:21:36.028 "transport_retry_count": 4, 00:21:36.028 "bdev_retry_count": 3, 00:21:36.028 "transport_ack_timeout": 0, 00:21:36.028 "ctrlr_loss_timeout_sec": 0, 00:21:36.028 "reconnect_delay_sec": 0, 00:21:36.028 "fast_io_fail_timeout_sec": 0, 00:21:36.028 "disable_auto_failback": false, 00:21:36.028 "generate_uuids": false, 00:21:36.028 "transport_tos": 0, 00:21:36.028 "nvme_error_stat": false, 00:21:36.028 "rdma_srq_size": 0, 00:21:36.028 "io_path_stat": false, 00:21:36.028 "allow_accel_sequence": false, 00:21:36.028 "rdma_max_cq_size": 0, 00:21:36.028 "rdma_cm_event_timeout_ms": 0, 00:21:36.028 "dhchap_digests": [ 00:21:36.028 "sha256", 00:21:36.028 "sha384", 00:21:36.028 "sha512" 00:21:36.028 ], 00:21:36.028 "dhchap_dhgroups": [ 00:21:36.028 "null", 00:21:36.028 "ffdhe2048", 00:21:36.028 "ffdhe3072", 00:21:36.028 "ffdhe4096", 00:21:36.028 "ffdhe6144", 00:21:36.028 "ffdhe8192" 00:21:36.028 ] 00:21:36.028 } 00:21:36.028 }, 00:21:36.028 { 00:21:36.028 "method": "bdev_nvme_attach_controller", 00:21:36.028 "params": { 00:21:36.028 "name": "nvme0", 00:21:36.028 "trtype": "TCP", 00:21:36.028 "adrfam": "IPv4", 00:21:36.028 "traddr": "10.0.0.2", 00:21:36.028 "trsvcid": "4420", 00:21:36.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.028 "prchk_reftag": false, 00:21:36.028 "prchk_guard": false, 00:21:36.028 "ctrlr_loss_timeout_sec": 0, 00:21:36.028 "reconnect_delay_sec": 0, 00:21:36.028 "fast_io_fail_timeout_sec": 0, 00:21:36.028 "psk": "key0", 00:21:36.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.028 "hdgst": false, 00:21:36.029 "ddgst": false 00:21:36.029 } 00:21:36.029 }, 00:21:36.029 { 00:21:36.029 "method": "bdev_nvme_set_hotplug", 00:21:36.029 "params": { 00:21:36.029 "period_us": 100000, 00:21:36.029 "enable": false 00:21:36.029 } 00:21:36.029 }, 00:21:36.029 { 00:21:36.029 "method": "bdev_enable_histogram", 00:21:36.029 "params": { 00:21:36.029 "name": "nvme0n1", 00:21:36.029 "enable": true 00:21:36.029 } 00:21:36.029 }, 00:21:36.029 { 00:21:36.029 "method": "bdev_wait_for_examine" 00:21:36.029 } 00:21:36.029 ] 00:21:36.029 }, 00:21:36.029 { 00:21:36.029 "subsystem": "nbd", 00:21:36.029 "config": [] 00:21:36.029 } 00:21:36.029 ] 00:21:36.029 }' 00:21:36.029 08:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:36.029 08:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.029 [2024-05-15 08:33:22.882906] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:21:36.029 [2024-05-15 08:33:22.882956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331014 ] 00:21:36.029 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.029 [2024-05-15 08:33:22.938274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.029 [2024-05-15 08:33:23.010266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.291 [2024-05-15 08:33:23.152602] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.861 08:33:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:36.861 08:33:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:36.861 08:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.861 08:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:36.861 08:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.861 08:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:37.121 Running I/O for 1 seconds... 00:21:38.059 00:21:38.059 Latency(us) 00:21:38.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.059 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:38.059 Verification LBA range: start 0x0 length 0x2000 00:21:38.059 nvme0n1 : 1.02 5223.93 20.41 0.00 0.00 24288.99 6781.55 26670.30 00:21:38.059 =================================================================================================================== 00:21:38.059 Total : 5223.93 20.41 0.00 0.00 24288.99 6781.55 26670.30 00:21:38.059 0 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:38.059 08:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:38.059 nvmf_trace.0 00:21:38.059 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:21:38.059 08:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 331014 00:21:38.059 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 331014 ']' 00:21:38.059 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 331014 
00:21:38.059 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:38.059 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:38.059 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 331014 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 331014' 00:21:38.318 killing process with pid 331014 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 331014 00:21:38.318 Received shutdown signal, test time was about 1.000000 seconds 00:21:38.318 00:21:38.318 Latency(us) 00:21:38.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.318 =================================================================================================================== 00:21:38.318 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 331014 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:38.318 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:38.318 rmmod nvme_tcp 00:21:38.577 rmmod nvme_fabrics 00:21:38.577 rmmod nvme_keyring 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 330983 ']' 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 330983 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 330983 ']' 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 330983 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 330983 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 330983' 00:21:38.577 killing process with pid 330983 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 330983 00:21:38.577 [2024-05-15 08:33:25.427408] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:38.577 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 330983 
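nvmftestfini above tears the transport down in a fixed order: sync outstanding I/O, retry unloading the kernel initiator modules (the initiator can briefly hold references after a test), then kill the target process. Reduced to a sketch of the essentials from nvmf/common.sh:

    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # retried because references may linger
    done
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"   # target PID recorded at startup (330983 in this run)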
00:21:38.837 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:38.837 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:38.837 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:38.837 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.837 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:38.837 08:33:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.837 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.837 08:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.746 08:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:40.746 08:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.uQVgHq5ybJ /tmp/tmp.9Tf2g3sEXs /tmp/tmp.KTYPvbw9bK 00:21:40.746 00:21:40.746 real 1m22.232s 00:21:40.746 user 2m7.401s 00:21:40.746 sys 0m27.054s 00:21:40.746 08:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:40.746 08:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.746 ************************************ 00:21:40.746 END TEST nvmf_tls 00:21:40.746 ************************************ 00:21:40.746 08:33:27 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:40.746 08:33:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:40.746 08:33:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:40.746 08:33:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.007 ************************************ 00:21:41.007 START TEST nvmf_fips 00:21:41.007 ************************************ 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:41.007 * Looking for test storage... 
00:21:41.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.007 08:33:27 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:41.007 08:33:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:41.008 08:33:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:41.267 08:33:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:41.267 08:33:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:41.268 Error setting digest 00:21:41.268 00226E82A27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:41.268 00226E82A27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:41.268 08:33:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.538 
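The gate traced above is the heart of the FIPS check: with the generated spdk_fips.conf exported, "openssl list -providers" must show both the base and fips providers, and a non-approved digest such as MD5 must refuse to initialize — exactly the "Error setting digest" failure captured in the trace. As a standalone probe (a sketch; spdk_fips.conf is the file assembled by build_openssl_config above):

    export OPENSSL_CONF=spdk_fips.conf
    openssl list -providers | grep name     # expect base and fips entries
    # Under an enforcing FIPS provider this digest must fail; success here
    # would mean the provider configuration is not actually in effect.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 succeeded: FIPS provider not enforcing" >&2
        exit 1
    fi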
08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:46.538 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:46.538 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:46.538 Found net devices under 0000:86:00.0: cvl_0_0 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:46.538 Found net devices under 0000:86:00.1: cvl_0_1 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.538 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:46.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:21:46.539 00:21:46.539 --- 10.0.0.2 ping statistics --- 00:21:46.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.539 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:21:46.539 08:33:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:21:46.539 00:21:46.539 --- 10.0.0.1 ping statistics --- 00:21:46.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.539 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=335013 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 335013 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 335013 ']' 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:46.539 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:46.539 [2024-05-15 08:33:33.111302] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:21:46.539 [2024-05-15 08:33:33.111349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.539 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.539 [2024-05-15 08:33:33.167144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.539 [2024-05-15 08:33:33.243202] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.539 [2024-05-15 08:33:33.243233] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:46.539 [2024-05-15 08:33:33.243240] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.539 [2024-05-15 08:33:33.243245] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.539 [2024-05-15 08:33:33.243250] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.539 [2024-05-15 08:33:33.243265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.107 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:47.107 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:21:47.107 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.107 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.107 08:33:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:47.107 08:33:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.107 08:33:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:47.107 08:33:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:47.107 08:33:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:47.108 08:33:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:47.108 08:33:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:47.108 08:33:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:47.108 08:33:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:47.108 08:33:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:47.108 [2024-05-15 08:33:34.066112] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.108 [2024-05-15 08:33:34.082098] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:47.108 [2024-05-15 08:33:34.082140] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:47.108 [2024-05-15 08:33:34.082304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.108 [2024-05-15 08:33:34.110195] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:47.108 malloc0 00:21:47.367 08:33:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.367 08:33:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=335104 00:21:47.367 08:33:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 335104 /var/tmp/bdevperf.sock 00:21:47.367 08:33:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 335104 ']' 00:21:47.367 08:33:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.367 08:33:34 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:21:47.367 08:33:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.367 08:33:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:47.367 08:33:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:47.367 08:33:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.367 [2024-05-15 08:33:34.199161] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:21:47.367 [2024-05-15 08:33:34.199215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335104 ] 00:21:47.367 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.367 [2024-05-15 08:33:34.248502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.367 [2024-05-15 08:33:34.325749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.303 08:33:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:48.303 08:33:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:21:48.303 08:33:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:48.303 [2024-05-15 08:33:35.143292] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.303 [2024-05-15 08:33:35.143358] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:48.303 TLSTESTn1 00:21:48.303 08:33:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:48.303 Running I/O for 10 seconds... 
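The ten-second TLSTESTn1 run above is set up through the TLS PSK interchange flow: the configured key is written verbatim to a file, the file is locked down to 0600, and that path is handed to the attach call (flagged deprecated in favor of keyring-based PSKs in the warnings above). As a standalone sketch using the values from this run (treat the key as a test vector only):

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"   # restrictive permissions, as fips.sh sets before use
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"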
00:22:00.514 00:22:00.514 Latency(us) 00:22:00.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.514 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:00.514 Verification LBA range: start 0x0 length 0x2000 00:22:00.514 TLSTESTn1 : 10.02 4883.40 19.08 0.00 0.00 26172.06 6639.08 46274.11 00:22:00.514 =================================================================================================================== 00:22:00.514 Total : 4883.40 19.08 0.00 0.00 26172.06 6639.08 46274.11 00:22:00.514 0 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:00.514 nvmf_trace.0 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 335104 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 335104 ']' 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 335104 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 335104 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 335104' 00:22:00.514 killing process with pid 335104 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 335104 00:22:00.514 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.514 00:22:00.514 Latency(us) 00:22:00.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.514 =================================================================================================================== 00:22:00.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.514 [2024-05-15 08:33:45.516370] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 335104 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:00.514 08:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:00.515 rmmod nvme_tcp 00:22:00.515 rmmod nvme_fabrics 00:22:00.515 rmmod nvme_keyring 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 335013 ']' 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 335013 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 335013 ']' 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 335013 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 335013 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 335013' 00:22:00.515 killing process with pid 335013 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 335013 00:22:00.515 [2024-05-15 08:33:45.838516] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:00.515 [2024-05-15 08:33:45.838545] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:00.515 08:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 335013 00:22:00.515 08:33:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:00.515 08:33:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:00.515 08:33:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:00.515 08:33:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.515 08:33:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.515 08:33:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.515 08:33:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.515 08:33:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.100 08:33:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:01.100 08:33:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:01.100 00:22:01.100 real 0m20.337s 00:22:01.100 user 0m22.213s 00:22:01.100 sys 0m8.872s 00:22:01.100 08:33:48 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:22:01.100 08:33:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:01.100 ************************************ 00:22:01.100 END TEST nvmf_fips 00:22:01.100 ************************************ 00:22:01.359 08:33:48 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:22:01.359 08:33:48 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:22:01.359 08:33:48 nvmf_tcp -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:22:01.359 08:33:48 nvmf_tcp -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:22:01.359 08:33:48 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.359 08:33:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.640 08:33:53 nvmf_tcp -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:06.640 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:06.640 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.640 08:33:53 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:06.641 Found net devices under 0000:86:00.0: cvl_0_0 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:06.641 Found net devices under 0000:86:00.1: cvl_0_1 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:22:06.641 08:33:53 nvmf_tcp -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:06.641 08:33:53 
nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:06.641 08:33:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:06.641 08:33:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.641 ************************************ 00:22:06.641 START TEST nvmf_perf_adq 00:22:06.641 ************************************ 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:06.641 * Looking for test storage... 00:22:06.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.641 08:33:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.919 08:33:58 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.919 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.919 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.919 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.919 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.919 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.919 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.919 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:11.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
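[annotation] The scan traced through nvmf/common.sh here classifies NICs by PCI vendor and device ID (this run matches the Intel E810 pair, 0x8086:0x159b) and then resolves each matching PCI function to its kernel net device through sysfs. A minimal standalone sketch of that discovery pattern; the device IDs and the cvl_* naming come from this run, everything else is illustrative:

    #!/usr/bin/env bash
    # Sketch of the PCI -> netdev discovery seen in nvmf/common.sh.
    # Assumption: only Intel E810 ports (0x8086:0x159b) matter, as in this run.
    intel=0x8086
    want_device=0x159b
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")    # e.g. 0x8086
      device=$(cat "$pci/device")    # e.g. 0x159b
      [[ $vendor == "$intel" && $device == "$want_device" ]] || continue
      # Each matching PCI function exposes its bound net device under net/.
      for net_dev in "$pci"/net/*; do
        [[ -e $net_dev ]] || continue
        net_devs+=("${net_dev##*/}")   # strip the sysfs path, keep e.g. cvl_0_0
      done
    done
    (( ${#net_devs[@]} )) && printf 'Found net device: %s\n' "${net_devs[@]}"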
00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:11.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:11.920 Found net devices under 0000:86:00.0: cvl_0_0 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:11.920 Found net devices under 0000:86:00.1: cvl_0_1 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:11.920 08:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:13.301 08:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:16.595 08:34:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:21.878 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:21.878 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:21.878 Found net devices under 0000:86:00.0: cvl_0_0 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:21.878 Found net devices under 0000:86:00.1: cvl_0_1 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:21.878 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:21.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:22:21.879 00:22:21.879 --- 10.0.0.2 ping statistics --- 00:22:21.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.879 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:22:21.879 00:22:21.879 --- 10.0.0.1 ping statistics --- 00:22:21.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.879 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=345182 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 345182 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 345182 ']' 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
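[annotation] What the harness has done at this point is build a two-port loopback topology: one E810 port is moved into the cvl_0_0_ns_spdk namespace to host the target at 10.0.0.2 while its sibling stays in the root namespace as the initiator at 10.0.0.1, and a single ping in each direction proves the path before nvmf_tgt is launched inside the namespace. A condensed sketch of that plumbing, with interface names and addresses copied from this run (root privileges assumed):

    # Sketch of nvmf_tcp_init from this run; must be run as root.
    ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"          # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator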
00:22:21.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:21.879 08:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.879 [2024-05-15 08:34:08.376623] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:22:21.879 [2024-05-15 08:34:08.376666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.879 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.879 [2024-05-15 08:34:08.433028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.879 [2024-05-15 08:34:08.505935] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.879 [2024-05-15 08:34:08.505975] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.879 [2024-05-15 08:34:08.505982] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.879 [2024-05-15 08:34:08.505988] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.879 [2024-05-15 08:34:08.505993] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.879 [2024-05-15 08:34:08.506084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.879 [2024-05-15 08:34:08.506200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.879 [2024-05-15 08:34:08.506259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.879 [2024-05-15 08:34:08.506261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.449 [2024-05-15 08:34:09.369512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.449 Malloc1 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.449 [2024-05-15 08:34:09.420940] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:22.449 [2024-05-15 08:34:09.421198] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=345433 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:22.449 08:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:22.449 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.979 08:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:24.979 08:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.979 08:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.979 08:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.979 08:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:24.979 "tick_rate": 2300000000, 00:22:24.979 "poll_groups": [ 00:22:24.979 { 00:22:24.979 "name": "nvmf_tgt_poll_group_000", 00:22:24.979 "admin_qpairs": 1, 00:22:24.979 "io_qpairs": 1, 00:22:24.979 "current_admin_qpairs": 1, 00:22:24.979 "current_io_qpairs": 1, 00:22:24.979 "pending_bdev_io": 0, 00:22:24.979 "completed_nvme_io": 19046, 00:22:24.979 "transports": [ 00:22:24.979 { 00:22:24.979 "trtype": "TCP" 00:22:24.979 } 00:22:24.979 ] 00:22:24.979 }, 00:22:24.979 { 00:22:24.979 "name": "nvmf_tgt_poll_group_001", 00:22:24.979 "admin_qpairs": 0, 00:22:24.979 "io_qpairs": 1, 00:22:24.979 "current_admin_qpairs": 0, 00:22:24.979 "current_io_qpairs": 1, 00:22:24.979 "pending_bdev_io": 0, 00:22:24.979 "completed_nvme_io": 19488, 00:22:24.979 "transports": [ 00:22:24.979 { 00:22:24.979 "trtype": "TCP" 00:22:24.979 } 00:22:24.979 ] 00:22:24.979 }, 00:22:24.979 { 00:22:24.979 "name": "nvmf_tgt_poll_group_002", 00:22:24.979 "admin_qpairs": 0, 00:22:24.979 "io_qpairs": 1, 00:22:24.979 "current_admin_qpairs": 0, 00:22:24.979 "current_io_qpairs": 1, 00:22:24.979 "pending_bdev_io": 0, 00:22:24.979 "completed_nvme_io": 19260, 00:22:24.979 "transports": [ 00:22:24.979 { 00:22:24.979 "trtype": "TCP" 00:22:24.979 } 00:22:24.979 ] 00:22:24.979 }, 00:22:24.979 { 00:22:24.979 "name": "nvmf_tgt_poll_group_003", 00:22:24.979 "admin_qpairs": 0, 00:22:24.979 "io_qpairs": 1, 00:22:24.979 "current_admin_qpairs": 0, 00:22:24.979 "current_io_qpairs": 1, 00:22:24.979 "pending_bdev_io": 0, 00:22:24.979 "completed_nvme_io": 19124, 00:22:24.979 "transports": [ 00:22:24.979 { 00:22:24.979 "trtype": "TCP" 00:22:24.979 } 00:22:24.979 ] 00:22:24.979 } 00:22:24.979 ] 00:22:24.979 }' 00:22:24.979 08:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:24.979 08:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:24.980 08:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:24.980 08:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:24.980 08:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 345433 00:22:33.088 Initializing NVMe Controllers 00:22:33.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:33.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:33.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:33.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:33.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:33.088 Initialization complete. Launching workers. 
00:22:33.088 ======================================================== 00:22:33.088 Latency(us) 00:22:33.088 Device Information : IOPS MiB/s Average min max 00:22:33.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10079.10 39.37 6351.46 2401.55 10693.62 00:22:33.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10334.60 40.37 6193.32 2216.59 10813.75 00:22:33.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10177.40 39.76 6287.78 2200.91 10804.56 00:22:33.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10096.90 39.44 6338.62 2199.90 11650.25 00:22:33.088 ======================================================== 00:22:33.088 Total : 40688.00 158.94 6292.18 2199.90 11650.25 00:22:33.088 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.088 rmmod nvme_tcp 00:22:33.088 rmmod nvme_fabrics 00:22:33.088 rmmod nvme_keyring 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 345182 ']' 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 345182 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 345182 ']' 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 345182 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 345182 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 345182' 00:22:33.088 killing process with pid 345182 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 345182 00:22:33.088 [2024-05-15 08:34:19.700887] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 345182 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.088 08:34:19 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.088 08:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.998 08:34:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.999 08:34:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:34.999 08:34:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:36.379 08:34:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:38.288 08:34:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:43.568 08:34:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:43.568 
08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:43.568 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:43.568 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:22:43.568 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:43.569 Found net devices under 0000:86:00.0: cvl_0_0 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:43.569 Found net devices under 0000:86:00.1: cvl_0_1 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:43.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:22:43.569 00:22:43.569 --- 10.0.0.2 ping statistics --- 00:22:43.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.569 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:22:43.569 00:22:43.569 --- 10.0.0.1 ping statistics --- 00:22:43.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.569 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:43.569 net.core.busy_poll = 1 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:43.569 net.core.busy_read = 1 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=349221 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 349221 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 349221 ']' 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:43.569 08:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:43.569 [2024-05-15 08:34:30.577400] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:22:43.569 [2024-05-15 08:34:30.577444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.829 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.829 [2024-05-15 08:34:30.633180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.829 [2024-05-15 08:34:30.713341] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.829 [2024-05-15 08:34:30.713375] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.829 [2024-05-15 08:34:30.713382] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.829 [2024-05-15 08:34:30.713388] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.829 [2024-05-15 08:34:30.713393] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
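[annotation] The tc commands traced just above are the heart of the ADQ configuration: hw-tc-offload and busy-polling are enabled, mqprio splits the port into two traffic classes (queues 0-1 default, queues 2-3 dedicated), and a hardware-offloaded flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into the dedicated class. A condensed restatement of those steps, with values copied from this run (assumes root inside the cvl_0_0_ns_spdk namespace on an ice-driven port):

    # Sketch of the ADQ setup traced above; run as root in the target namespace.
    dev=cvl_0_0
    ethtool --offload "$dev" hw-tc-offload on
    ethtool --set-priv-flags "$dev" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1
    # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ).
    tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$dev" ingress
    # Hardware-offloaded flower filter: NVMe/TCP traffic to the listener lands in TC1.
    tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1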
00:22:43.829 [2024-05-15 08:34:30.713431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.829 [2024-05-15 08:34:30.713530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.829 [2024-05-15 08:34:30.713627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.829 [2024-05-15 08:34:30.713629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.398 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:44.398 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:22:44.398 08:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.398 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.398 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.658 [2024-05-15 08:34:31.566952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.658 Malloc1 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.658 08:34:31 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.658 [2024-05-15 08:34:31.610398] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:44.658 [2024-05-15 08:34:31.610628] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=349442 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:44.658 08:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:44.658 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.190 08:34:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:47.190 08:34:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.190 08:34:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.190 08:34:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.190 08:34:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:47.190 "tick_rate": 2300000000, 00:22:47.190 "poll_groups": [ 00:22:47.190 { 00:22:47.190 "name": "nvmf_tgt_poll_group_000", 00:22:47.190 "admin_qpairs": 1, 00:22:47.190 "io_qpairs": 2, 00:22:47.190 "current_admin_qpairs": 1, 00:22:47.190 "current_io_qpairs": 2, 00:22:47.190 "pending_bdev_io": 0, 00:22:47.190 "completed_nvme_io": 27782, 00:22:47.190 "transports": [ 00:22:47.190 { 00:22:47.190 "trtype": "TCP" 00:22:47.190 } 00:22:47.190 ] 00:22:47.190 }, 00:22:47.190 { 00:22:47.190 "name": "nvmf_tgt_poll_group_001", 00:22:47.190 "admin_qpairs": 0, 00:22:47.190 "io_qpairs": 2, 00:22:47.190 "current_admin_qpairs": 0, 00:22:47.190 "current_io_qpairs": 2, 00:22:47.190 "pending_bdev_io": 0, 00:22:47.190 "completed_nvme_io": 28213, 00:22:47.190 "transports": [ 00:22:47.190 { 00:22:47.190 "trtype": "TCP" 00:22:47.190 } 00:22:47.190 ] 00:22:47.190 }, 00:22:47.190 { 00:22:47.190 "name": 
"nvmf_tgt_poll_group_002", 00:22:47.190 "admin_qpairs": 0, 00:22:47.190 "io_qpairs": 0, 00:22:47.190 "current_admin_qpairs": 0, 00:22:47.190 "current_io_qpairs": 0, 00:22:47.190 "pending_bdev_io": 0, 00:22:47.190 "completed_nvme_io": 0, 00:22:47.190 "transports": [ 00:22:47.190 { 00:22:47.190 "trtype": "TCP" 00:22:47.190 } 00:22:47.190 ] 00:22:47.190 }, 00:22:47.190 { 00:22:47.190 "name": "nvmf_tgt_poll_group_003", 00:22:47.190 "admin_qpairs": 0, 00:22:47.190 "io_qpairs": 0, 00:22:47.190 "current_admin_qpairs": 0, 00:22:47.190 "current_io_qpairs": 0, 00:22:47.190 "pending_bdev_io": 0, 00:22:47.190 "completed_nvme_io": 0, 00:22:47.190 "transports": [ 00:22:47.190 { 00:22:47.190 "trtype": "TCP" 00:22:47.190 } 00:22:47.190 ] 00:22:47.190 } 00:22:47.190 ] 00:22:47.190 }' 00:22:47.190 08:34:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:47.190 08:34:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:47.190 08:34:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:47.190 08:34:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:47.190 08:34:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 349442 00:22:55.303 Initializing NVMe Controllers 00:22:55.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:55.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:55.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:55.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:55.303 Initialization complete. Launching workers. 
00:22:55.303 ========================================================
00:22:55.303 Latency(us)
00:22:55.303 Device Information : IOPS MiB/s Average min max
00:22:55.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7886.80 30.81 8116.32 1554.55 52699.87
00:22:55.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7393.00 28.88 8657.00 1532.36 53122.64
00:22:55.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6870.00 26.84 9355.22 1545.85 56583.96
00:22:55.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7374.40 28.81 8709.10 1469.37 53410.68
00:22:55.303 ========================================================
00:22:55.303 Total : 29524.19 115.33 8688.05 1469.37 56583.96
00:22:55.303
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:55.303 rmmod nvme_tcp
00:22:55.303 rmmod nvme_fabrics
00:22:55.303 rmmod nvme_keyring
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 349221 ']'
00:22:55.303 08:34:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 349221
00:22:55.304 08:34:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 349221 ']'
00:22:55.304 08:34:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 349221
00:22:55.304 08:34:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname
00:22:55.304 08:34:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:22:55.304 08:34:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 349221
00:22:55.304 08:34:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:22:55.304 08:34:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:22:55.304 08:34:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 349221'
00:22:55.304 killing process with pid 349221
00:22:55.304 08:34:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 349221
00:22:55.304 [2024-05-15 08:34:41.913717] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:22:55.304 08:34:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 349221
00:22:55.304 08:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:55.304 08:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:55.304 08:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:55.304 08:34:42
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:55.304 08:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:55.304 08:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.304 08:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.304 08:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.594 08:34:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:58.594 08:34:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:58.594 00:22:58.594 real 0m51.883s 00:22:58.594 user 2m49.708s 00:22:58.594 sys 0m10.339s 00:22:58.594 08:34:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:58.594 08:34:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.594 ************************************ 00:22:58.594 END TEST nvmf_perf_adq 00:22:58.594 ************************************ 00:22:58.594 08:34:45 nvmf_tcp -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:58.595 08:34:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:58.595 08:34:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:58.595 08:34:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:58.595 ************************************ 00:22:58.595 START TEST nvmf_shutdown 00:22:58.595 ************************************ 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:58.595 * Looking for test storage... 
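
Note: nvmf_shutdown_tc1 opens with the same nvmftestinit bring-up used throughout this job: the first E810 port is moved into a network namespace to act as the target while the second port stays in the root namespace as the initiator. A condensed sketch of the netns commands traced a few lines below (interface names and addresses from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator side, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
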
00:22:58.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:58.595 ************************************ 00:22:58.595 START TEST nvmf_shutdown_tc1 00:22:58.595 ************************************ 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:22:58.595 08:34:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:58.595 08:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:03.867 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.867 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:03.868 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.868 08:34:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:03.868 Found net devices under 0000:86:00.0: cvl_0_0 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:03.868 Found net devices under 0000:86:00.1: cvl_0_1 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:03.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:23:03.868 00:23:03.868 --- 10.0.0.2 ping statistics --- 00:23:03.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.868 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:03.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:23:03.868 00:23:03.868 --- 10.0.0.1 ping statistics --- 00:23:03.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.868 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=354694 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 354694 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 354694 ']' 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:03.868 08:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.868 [2024-05-15 08:34:50.770657] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:23:03.868 [2024-05-15 08:34:50.770702] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.868 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.868 [2024-05-15 08:34:50.829008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:04.128 [2024-05-15 08:34:50.911798] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.128 [2024-05-15 08:34:50.911829] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.128 [2024-05-15 08:34:50.911836] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.128 [2024-05-15 08:34:50.911842] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.128 [2024-05-15 08:34:50.911847] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.128 [2024-05-15 08:34:50.911891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.128 [2024-05-15 08:34:50.911992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:04.128 [2024-05-15 08:34:50.912097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.128 [2024-05-15 08:34:50.912099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:04.696 [2024-05-15 08:34:51.621988] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.696 08:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:04.696 Malloc1 00:23:04.696 [2024-05-15 08:34:51.717348] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:04.696 [2024-05-15 08:34:51.717579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.956 Malloc2 00:23:04.956 Malloc3 00:23:04.956 Malloc4 00:23:04.956 Malloc5 00:23:04.956 Malloc6 00:23:04.956 Malloc7 00:23:05.217 Malloc8 00:23:05.217 Malloc9 00:23:05.217 Malloc10 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:05.217 08:34:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=354980 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 354980 /var/tmp/bdevperf.sock 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 354980 ']' 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.217 { 00:23:05.217 "params": { 00:23:05.217 "name": "Nvme$subsystem", 00:23:05.217 "trtype": "$TEST_TRANSPORT", 00:23:05.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.217 "adrfam": "ipv4", 00:23:05.217 "trsvcid": "$NVMF_PORT", 00:23:05.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.217 "hdgst": ${hdgst:-false}, 00:23:05.217 "ddgst": ${ddgst:-false} 00:23:05.217 }, 00:23:05.217 "method": "bdev_nvme_attach_controller" 00:23:05.217 } 00:23:05.217 EOF 00:23:05.217 )") 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.217 { 00:23:05.217 "params": { 00:23:05.217 "name": "Nvme$subsystem", 00:23:05.217 "trtype": "$TEST_TRANSPORT", 00:23:05.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.217 "adrfam": "ipv4", 00:23:05.217 "trsvcid": "$NVMF_PORT", 00:23:05.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.217 "hdgst": ${hdgst:-false}, 00:23:05.217 "ddgst": ${ddgst:-false} 00:23:05.217 }, 00:23:05.217 "method": "bdev_nvme_attach_controller" 00:23:05.217 } 00:23:05.217 EOF 00:23:05.217 )") 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.217 { 00:23:05.217 "params": { 00:23:05.217 "name": "Nvme$subsystem", 00:23:05.217 "trtype": "$TEST_TRANSPORT", 00:23:05.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.217 "adrfam": "ipv4", 00:23:05.217 "trsvcid": "$NVMF_PORT", 00:23:05.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.217 "hdgst": ${hdgst:-false}, 00:23:05.217 "ddgst": ${ddgst:-false} 00:23:05.217 }, 00:23:05.217 "method": "bdev_nvme_attach_controller" 00:23:05.217 } 00:23:05.217 EOF 00:23:05.217 )") 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.217 { 00:23:05.217 "params": { 00:23:05.217 "name": "Nvme$subsystem", 00:23:05.217 "trtype": "$TEST_TRANSPORT", 00:23:05.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.217 "adrfam": "ipv4", 00:23:05.217 "trsvcid": "$NVMF_PORT", 00:23:05.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.217 "hdgst": ${hdgst:-false}, 00:23:05.217 "ddgst": ${ddgst:-false} 00:23:05.217 }, 00:23:05.217 "method": "bdev_nvme_attach_controller" 00:23:05.217 } 00:23:05.217 EOF 00:23:05.217 )") 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.217 { 00:23:05.217 "params": { 00:23:05.217 "name": "Nvme$subsystem", 00:23:05.217 "trtype": "$TEST_TRANSPORT", 00:23:05.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.217 "adrfam": "ipv4", 00:23:05.217 "trsvcid": "$NVMF_PORT", 00:23:05.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.217 "hdgst": ${hdgst:-false}, 00:23:05.217 "ddgst": ${ddgst:-false} 00:23:05.217 }, 00:23:05.217 "method": "bdev_nvme_attach_controller" 00:23:05.217 } 00:23:05.217 EOF 00:23:05.217 )") 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.217 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.217 { 00:23:05.217 "params": { 00:23:05.217 "name": "Nvme$subsystem", 00:23:05.217 "trtype": "$TEST_TRANSPORT", 00:23:05.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.217 "adrfam": "ipv4", 00:23:05.217 "trsvcid": "$NVMF_PORT", 00:23:05.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.217 "hdgst": ${hdgst:-false}, 00:23:05.217 "ddgst": ${ddgst:-false} 00:23:05.217 }, 00:23:05.217 "method": "bdev_nvme_attach_controller" 00:23:05.217 } 00:23:05.217 EOF 00:23:05.217 )") 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:05.218 [2024-05-15 08:34:52.190602] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:23:05.218 [2024-05-15 08:34:52.190654] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.218 { 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme$subsystem", 00:23:05.218 "trtype": "$TEST_TRANSPORT", 00:23:05.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "$NVMF_PORT", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.218 "hdgst": ${hdgst:-false}, 00:23:05.218 "ddgst": ${ddgst:-false} 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 } 00:23:05.218 EOF 00:23:05.218 )") 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.218 { 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme$subsystem", 00:23:05.218 "trtype": "$TEST_TRANSPORT", 00:23:05.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "$NVMF_PORT", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.218 "hdgst": ${hdgst:-false}, 00:23:05.218 "ddgst": ${ddgst:-false} 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 } 00:23:05.218 EOF 00:23:05.218 )") 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.218 { 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme$subsystem", 00:23:05.218 "trtype": "$TEST_TRANSPORT", 00:23:05.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "$NVMF_PORT", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.218 "hdgst": ${hdgst:-false}, 00:23:05.218 "ddgst": ${ddgst:-false} 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 } 00:23:05.218 EOF 00:23:05.218 )") 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.218 { 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme$subsystem", 00:23:05.218 "trtype": "$TEST_TRANSPORT", 00:23:05.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "$NVMF_PORT", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.218 "hdgst": ${hdgst:-false}, 00:23:05.218 "ddgst": 
${ddgst:-false} 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 } 00:23:05.218 EOF 00:23:05.218 )") 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:05.218 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:05.218 08:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme1", 00:23:05.218 "trtype": "tcp", 00:23:05.218 "traddr": "10.0.0.2", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "4420", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.218 "hdgst": false, 00:23:05.218 "ddgst": false 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 },{ 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme2", 00:23:05.218 "trtype": "tcp", 00:23:05.218 "traddr": "10.0.0.2", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "4420", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:05.218 "hdgst": false, 00:23:05.218 "ddgst": false 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 },{ 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme3", 00:23:05.218 "trtype": "tcp", 00:23:05.218 "traddr": "10.0.0.2", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "4420", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:05.218 "hdgst": false, 00:23:05.218 "ddgst": false 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 },{ 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme4", 00:23:05.218 "trtype": "tcp", 00:23:05.218 "traddr": "10.0.0.2", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "4420", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:05.218 "hdgst": false, 00:23:05.218 "ddgst": false 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 },{ 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme5", 00:23:05.218 "trtype": "tcp", 00:23:05.218 "traddr": "10.0.0.2", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "4420", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:05.218 "hdgst": false, 00:23:05.218 "ddgst": false 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 },{ 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme6", 00:23:05.218 "trtype": "tcp", 00:23:05.218 "traddr": "10.0.0.2", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "4420", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:05.218 "hdgst": false, 00:23:05.218 "ddgst": false 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 },{ 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme7", 00:23:05.218 "trtype": "tcp", 00:23:05.218 "traddr": "10.0.0.2", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "4420", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:05.218 "hdgst": false, 00:23:05.218 "ddgst": false 00:23:05.218 }, 
00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 },{ 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme8", 00:23:05.218 "trtype": "tcp", 00:23:05.218 "traddr": "10.0.0.2", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "4420", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:05.218 "hdgst": false, 00:23:05.218 "ddgst": false 00:23:05.218 }, 00:23:05.218 "method": "bdev_nvme_attach_controller" 00:23:05.218 },{ 00:23:05.218 "params": { 00:23:05.218 "name": "Nvme9", 00:23:05.218 "trtype": "tcp", 00:23:05.218 "traddr": "10.0.0.2", 00:23:05.218 "adrfam": "ipv4", 00:23:05.218 "trsvcid": "4420", 00:23:05.218 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:05.218 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:05.218 "hdgst": false, 00:23:05.218 "ddgst": false 00:23:05.219 }, 00:23:05.219 "method": "bdev_nvme_attach_controller" 00:23:05.219 },{ 00:23:05.219 "params": { 00:23:05.219 "name": "Nvme10", 00:23:05.219 "trtype": "tcp", 00:23:05.219 "traddr": "10.0.0.2", 00:23:05.219 "adrfam": "ipv4", 00:23:05.219 "trsvcid": "4420", 00:23:05.219 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:05.219 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:05.219 "hdgst": false, 00:23:05.219 "ddgst": false 00:23:05.219 }, 00:23:05.219 "method": "bdev_nvme_attach_controller" 00:23:05.219 }' 00:23:05.479 [2024-05-15 08:34:52.247542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.479 [2024-05-15 08:34:52.320140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.858 08:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:06.858 08:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:23:06.858 08:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:06.858 08:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.858 08:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:06.858 08:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.858 08:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 354980 00:23:06.858 08:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:06.859 08:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:07.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 354980 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 354694 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:07.798 08:34:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.798 { 00:23:07.798 "params": { 00:23:07.798 "name": "Nvme$subsystem", 00:23:07.798 "trtype": "$TEST_TRANSPORT", 00:23:07.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.798 "adrfam": "ipv4", 00:23:07.798 "trsvcid": "$NVMF_PORT", 00:23:07.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.798 "hdgst": ${hdgst:-false}, 00:23:07.798 "ddgst": ${ddgst:-false} 00:23:07.798 }, 00:23:07.798 "method": "bdev_nvme_attach_controller" 00:23:07.798 } 00:23:07.798 EOF 00:23:07.798 )") 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.798 { 00:23:07.798 "params": { 00:23:07.798 "name": "Nvme$subsystem", 00:23:07.798 "trtype": "$TEST_TRANSPORT", 00:23:07.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.798 "adrfam": "ipv4", 00:23:07.798 "trsvcid": "$NVMF_PORT", 00:23:07.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.798 "hdgst": ${hdgst:-false}, 00:23:07.798 "ddgst": ${ddgst:-false} 00:23:07.798 }, 00:23:07.798 "method": "bdev_nvme_attach_controller" 00:23:07.798 } 00:23:07.798 EOF 00:23:07.798 )") 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.798 { 00:23:07.798 "params": { 00:23:07.798 "name": "Nvme$subsystem", 00:23:07.798 "trtype": "$TEST_TRANSPORT", 00:23:07.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.798 "adrfam": "ipv4", 00:23:07.798 "trsvcid": "$NVMF_PORT", 00:23:07.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.798 "hdgst": ${hdgst:-false}, 00:23:07.798 "ddgst": ${ddgst:-false} 00:23:07.798 }, 00:23:07.798 "method": "bdev_nvme_attach_controller" 00:23:07.798 } 00:23:07.798 EOF 00:23:07.798 )") 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.798 { 00:23:07.798 "params": { 00:23:07.798 "name": "Nvme$subsystem", 00:23:07.798 "trtype": "$TEST_TRANSPORT", 00:23:07.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.798 "adrfam": "ipv4", 00:23:07.798 "trsvcid": "$NVMF_PORT", 00:23:07.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.798 "hdgst": ${hdgst:-false}, 00:23:07.798 "ddgst": ${ddgst:-false} 00:23:07.798 }, 00:23:07.798 "method": "bdev_nvme_attach_controller" 00:23:07.798 } 00:23:07.798 EOF 00:23:07.798 )") 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:07.798 08:34:54 
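One detail of the template worth calling out: "hdgst": ${hdgst:-false} and "ddgst": ${ddgst:-false} are ordinary bash default expansions, so every generated controller carries "hdgst": false unless the caller exports hdgst/ddgst before invoking the helper; that is exactly the false seen in the rendered config above. In isolation:

unset hdgst
echo "\"hdgst\": ${hdgst:-false}"   # prints "hdgst": false
hdgst=true
echo "\"hdgst\": ${hdgst:-false}"   # prints "hdgst": true

The repeated @534/@554 passes below are the same template being instantiated for the remaining subsystems.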
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.798 { 00:23:07.798 "params": { 00:23:07.798 "name": "Nvme$subsystem", 00:23:07.798 "trtype": "$TEST_TRANSPORT", 00:23:07.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.798 "adrfam": "ipv4", 00:23:07.798 "trsvcid": "$NVMF_PORT", 00:23:07.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.798 "hdgst": ${hdgst:-false}, 00:23:07.798 "ddgst": ${ddgst:-false} 00:23:07.798 }, 00:23:07.798 "method": "bdev_nvme_attach_controller" 00:23:07.798 } 00:23:07.798 EOF 00:23:07.798 )") 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.798 { 00:23:07.798 "params": { 00:23:07.798 "name": "Nvme$subsystem", 00:23:07.798 "trtype": "$TEST_TRANSPORT", 00:23:07.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.798 "adrfam": "ipv4", 00:23:07.798 "trsvcid": "$NVMF_PORT", 00:23:07.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.798 "hdgst": ${hdgst:-false}, 00:23:07.798 "ddgst": ${ddgst:-false} 00:23:07.798 }, 00:23:07.798 "method": "bdev_nvme_attach_controller" 00:23:07.798 } 00:23:07.798 EOF 00:23:07.798 )") 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:07.798 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.799 { 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme$subsystem", 00:23:07.799 "trtype": "$TEST_TRANSPORT", 00:23:07.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "$NVMF_PORT", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.799 "hdgst": ${hdgst:-false}, 00:23:07.799 "ddgst": ${ddgst:-false} 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 } 00:23:07.799 EOF 00:23:07.799 )") 00:23:07.799 [2024-05-15 08:34:54.728944] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:23:07.799 [2024-05-15 08:34:54.728993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355457 ] 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.799 { 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme$subsystem", 00:23:07.799 "trtype": "$TEST_TRANSPORT", 00:23:07.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "$NVMF_PORT", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.799 "hdgst": ${hdgst:-false}, 00:23:07.799 "ddgst": ${ddgst:-false} 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 } 00:23:07.799 EOF 00:23:07.799 )") 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.799 { 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme$subsystem", 00:23:07.799 "trtype": "$TEST_TRANSPORT", 00:23:07.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "$NVMF_PORT", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.799 "hdgst": ${hdgst:-false}, 00:23:07.799 "ddgst": ${ddgst:-false} 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 } 00:23:07.799 EOF 00:23:07.799 )") 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.799 { 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme$subsystem", 00:23:07.799 "trtype": "$TEST_TRANSPORT", 00:23:07.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "$NVMF_PORT", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.799 "hdgst": ${hdgst:-false}, 00:23:07.799 "ddgst": ${ddgst:-false} 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 } 00:23:07.799 EOF 00:23:07.799 )") 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
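The jq . at nvmf/common.sh@556, just executed above, doubles as a syntax check: re-parsing the assembled document makes a malformed stanza fail inside gen_nvmf_target_json instead of surfacing later as an opaque config-load error in bdevperf. Reproducible standalone:

printf '%s' '{"params": {"name": "Nvme1"}}' | jq .   # valid: pretty-printed
printf '%s' '{"params": {"name": "Nvme1"' | jq . ||
  echo 'invalid JSON rejected before reaching bdevperf'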
00:23:07.799 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:07.799 08:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme1", 00:23:07.799 "trtype": "tcp", 00:23:07.799 "traddr": "10.0.0.2", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "4420", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.799 "hdgst": false, 00:23:07.799 "ddgst": false 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 },{ 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme2", 00:23:07.799 "trtype": "tcp", 00:23:07.799 "traddr": "10.0.0.2", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "4420", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:07.799 "hdgst": false, 00:23:07.799 "ddgst": false 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 },{ 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme3", 00:23:07.799 "trtype": "tcp", 00:23:07.799 "traddr": "10.0.0.2", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "4420", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:07.799 "hdgst": false, 00:23:07.799 "ddgst": false 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 },{ 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme4", 00:23:07.799 "trtype": "tcp", 00:23:07.799 "traddr": "10.0.0.2", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "4420", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:07.799 "hdgst": false, 00:23:07.799 "ddgst": false 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 },{ 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme5", 00:23:07.799 "trtype": "tcp", 00:23:07.799 "traddr": "10.0.0.2", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "4420", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:07.799 "hdgst": false, 00:23:07.799 "ddgst": false 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 },{ 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme6", 00:23:07.799 "trtype": "tcp", 00:23:07.799 "traddr": "10.0.0.2", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "4420", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:07.799 "hdgst": false, 00:23:07.799 "ddgst": false 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 },{ 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme7", 00:23:07.799 "trtype": "tcp", 00:23:07.799 "traddr": "10.0.0.2", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "4420", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:07.799 "hdgst": false, 00:23:07.799 "ddgst": false 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 },{ 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme8", 00:23:07.799 "trtype": "tcp", 00:23:07.799 "traddr": "10.0.0.2", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "4420", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:07.799 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:23:07.799 "hdgst": false, 00:23:07.799 "ddgst": false 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 },{ 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme9", 00:23:07.799 "trtype": "tcp", 00:23:07.799 "traddr": "10.0.0.2", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "4420", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:07.799 "hdgst": false, 00:23:07.799 "ddgst": false 00:23:07.799 }, 00:23:07.799 "method": "bdev_nvme_attach_controller" 00:23:07.799 },{ 00:23:07.799 "params": { 00:23:07.799 "name": "Nvme10", 00:23:07.799 "trtype": "tcp", 00:23:07.799 "traddr": "10.0.0.2", 00:23:07.799 "adrfam": "ipv4", 00:23:07.799 "trsvcid": "4420", 00:23:07.799 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:07.799 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:07.799 "hdgst": false, 00:23:07.799 "ddgst": false 00:23:07.800 }, 00:23:07.800 "method": "bdev_nvme_attach_controller" 00:23:07.800 }' 00:23:07.800 [2024-05-15 08:34:54.786447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.060 [2024-05-15 08:34:54.860382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.440 Running I/O for 1 seconds... 00:23:10.818 00:23:10.818 Latency(us) 00:23:10.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.818 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:10.818 Verification LBA range: start 0x0 length 0x400 00:23:10.818 Nvme1n1 : 1.02 250.58 15.66 0.00 0.00 253011.26 21199.47 211538.81 00:23:10.818 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:10.818 Verification LBA range: start 0x0 length 0x400 00:23:10.818 Nvme2n1 : 1.05 244.81 15.30 0.00 0.00 255066.16 16754.42 220656.86 00:23:10.818 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:10.818 Verification LBA range: start 0x0 length 0x400 00:23:10.818 Nvme3n1 : 1.13 283.32 17.71 0.00 0.00 217575.69 12879.25 217921.45 00:23:10.818 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:10.818 Verification LBA range: start 0x0 length 0x400 00:23:10.818 Nvme4n1 : 1.14 337.00 21.06 0.00 0.00 179427.65 15158.76 210627.01 00:23:10.818 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:10.818 Verification LBA range: start 0x0 length 0x400 00:23:10.818 Nvme5n1 : 1.12 285.55 17.85 0.00 0.00 209261.79 15728.64 215186.03 00:23:10.818 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:10.818 Verification LBA range: start 0x0 length 0x400 00:23:10.818 Nvme6n1 : 1.12 286.05 17.88 0.00 0.00 205901.65 23592.96 214274.23 00:23:10.818 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:10.818 Verification LBA range: start 0x0 length 0x400 00:23:10.818 Nvme7n1 : 1.13 287.60 17.98 0.00 0.00 201456.45 2849.39 209715.20 00:23:10.818 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:10.818 Verification LBA range: start 0x0 length 0x400 00:23:10.818 Nvme8n1 : 1.13 282.11 17.63 0.00 0.00 202655.65 15728.64 216097.84 00:23:10.818 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:10.818 Verification LBA range: start 0x0 length 0x400 00:23:10.818 Nvme9n1 : 1.18 271.85 16.99 0.00 0.00 200324.50 12423.35 217009.64 00:23:10.818 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:10.818 
Verification LBA range: start 0x0 length 0x400 00:23:10.818 Nvme10n1 : 1.19 277.81 17.36 0.00 0.00 193342.65 1467.44 235245.75 00:23:10.818 =================================================================================================================== 00:23:10.818 Total : 2806.68 175.42 0.00 0.00 209356.56 1467.44 235245.75 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.818 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.818 rmmod nvme_tcp 00:23:10.818 rmmod nvme_fabrics 00:23:11.078 rmmod nvme_keyring 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 354694 ']' 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 354694 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 354694 ']' 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 354694 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 354694 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 354694' 00:23:11.078 killing process with pid 354694 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 354694 00:23:11.078 [2024-05-15 08:34:57.924646] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:11.078 08:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 354694 00:23:11.337 08:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:11.337 08:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:11.337 08:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:11.337 08:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.337 08:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:11.337 08:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.337 08:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.337 08:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:13.872 00:23:13.872 real 0m14.940s 00:23:13.872 user 0m35.130s 00:23:13.872 sys 0m5.260s 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:13.872 ************************************ 00:23:13.872 END TEST nvmf_shutdown_tc1 00:23:13.872 ************************************ 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:13.872 ************************************ 00:23:13.872 START TEST nvmf_shutdown_tc2 00:23:13.872 ************************************ 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:13.872 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.873 08:35:00 
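The arrays filled just now form a small PCI ID table: e810 and x722 gather Intel device IDs, mlx gathers Mellanox ones, and pci_devs is subsequently narrowed to the e810 family (the [[ e810 == e810 ]] tests at @329-@330 below). A condensed sketch of that classification, using a subset of the IDs visible in the log:

declare -A nic_family=(
  [0x1592]=e810 [0x159b]=e810   # Intel E810 variants (ice driver)
  [0x37d2]=x722                 # Intel X722
  [0x101d]=mlx [0x1017]=mlx     # Mellanox ConnectX adapters
)
dev_id=0x159b                   # the ID reported for 0000:86:00.0 below
echo "device $dev_id -> ${nic_family[$dev_id]:-unknown} family"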
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:13.873 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:13.873 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.873 08:35:00 
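Steps @383 and @399 above convert a PCI address into kernel interface names: globbing /sys/bus/pci/devices/<addr>/net/ lists the netdevs bound to that function, and the ##*/ expansion strips the sysfs path down to the bare interface name, producing the "Found net devices" lines that follow. Standalone, with this rig's first port:

pci=0000:86:00.0
pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the ifname
echo "Found net devices under $pci: ${pci_net_devs[*]}"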
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:13.873 Found net devices under 0000:86:00.0: cvl_0_0 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:13.873 Found net devices under 0000:86:00.1: cvl_0_1 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:13.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:23:13.873 00:23:13.873 --- 10.0.0.2 ping statistics --- 00:23:13.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.873 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:13.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:23:13.873 00:23:13.873 --- 10.0.0.1 ping statistics --- 00:23:13.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.873 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=356549 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 356549 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:13.873 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 356549 ']' 00:23:13.874 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.874 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:13.874 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.874 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:13.874 08:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.874 [2024-05-15 08:35:00.819817] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:23:13.874 [2024-05-15 08:35:00.819857] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.874 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.874 [2024-05-15 08:35:00.877460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.133 [2024-05-15 08:35:00.955686] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.133 [2024-05-15 08:35:00.955726] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.133 [2024-05-15 08:35:00.955734] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.133 [2024-05-15 08:35:00.955740] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.133 [2024-05-15 08:35:00.955745] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
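The -m 0x1E passed to nvmf_tgt above is a CPU bitmap, and it explains the reactor placement reported next: 0x1E is binary 11110, so bits 1-4 are set and core 0 stays free (the single-core bdevperf runs in this log use -c 0x1, core 0 only). A quick check:

for core in {0..4}; do
  (( (0x1E >> core) & 1 )) && echo "core $core: reactor" || echo "core $core: free"
done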
00:23:14.133 [2024-05-15 08:35:00.955860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.133 [2024-05-15 08:35:00.955966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.133 [2024-05-15 08:35:00.956073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.133 [2024-05-15 08:35:00.956074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:14.701 [2024-05-15 08:35:01.660979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:14.701 08:35:01 
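Each @27/@28 pair above is one pass of the create_subsystems loop, cat-appending a subsystem's worth of RPC commands to rpcs.txt (the remaining passes continue below); the file is then replayed through a single rpc_cmd session at @35, which is far cheaper than spawning one rpc.py process per command. A rough sketch of the batching idea, with hypothetical stanza contents, since the log shows only the loop skeleton and the Malloc1-Malloc10 bdevs it produces:

rpcs=/tmp/rpcs.txt
: > "$rpcs"
for i in 1 2 3; do
  cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# scripts/rpc.py accepts one command per line on stdin:
# scripts/rpc.py -s /var/tmp/spdk.sock < "$rpcs"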
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.701 08:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:14.960 Malloc1 00:23:14.960 [2024-05-15 08:35:01.756749] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:14.960 [2024-05-15 08:35:01.757006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.960 Malloc2 00:23:14.960 Malloc3 00:23:14.960 Malloc4 00:23:14.960 Malloc5 00:23:14.960 Malloc6 00:23:15.220 Malloc7 00:23:15.220 Malloc8 00:23:15.220 Malloc9 00:23:15.220 Malloc10 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=356842 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 356842 /var/tmp/bdevperf.sock 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 356842 ']' 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.220 { 00:23:15.220 "params": { 00:23:15.220 "name": "Nvme$subsystem", 00:23:15.220 "trtype": "$TEST_TRANSPORT", 00:23:15.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.220 "adrfam": "ipv4", 00:23:15.220 "trsvcid": "$NVMF_PORT", 00:23:15.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.220 "hdgst": ${hdgst:-false}, 00:23:15.220 "ddgst": ${ddgst:-false} 00:23:15.220 }, 00:23:15.220 "method": "bdev_nvme_attach_controller" 00:23:15.220 } 00:23:15.220 EOF 00:23:15.220 )") 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.220 { 00:23:15.220 "params": { 00:23:15.220 "name": "Nvme$subsystem", 00:23:15.220 "trtype": "$TEST_TRANSPORT", 00:23:15.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.220 "adrfam": "ipv4", 00:23:15.220 "trsvcid": "$NVMF_PORT", 00:23:15.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.220 "hdgst": ${hdgst:-false}, 00:23:15.220 "ddgst": ${ddgst:-false} 00:23:15.220 }, 00:23:15.220 "method": "bdev_nvme_attach_controller" 00:23:15.220 } 00:23:15.220 EOF 00:23:15.220 )") 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.220 { 00:23:15.220 "params": { 00:23:15.220 "name": "Nvme$subsystem", 00:23:15.220 "trtype": "$TEST_TRANSPORT", 00:23:15.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.220 "adrfam": "ipv4", 00:23:15.220 "trsvcid": "$NVMF_PORT", 00:23:15.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.220 "hdgst": ${hdgst:-false}, 00:23:15.220 "ddgst": ${ddgst:-false} 00:23:15.220 }, 00:23:15.220 "method": "bdev_nvme_attach_controller" 00:23:15.220 } 00:23:15.220 EOF 00:23:15.220 )") 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.220 { 00:23:15.220 "params": { 
00:23:15.220 "name": "Nvme$subsystem", 00:23:15.220 "trtype": "$TEST_TRANSPORT", 00:23:15.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.220 "adrfam": "ipv4", 00:23:15.220 "trsvcid": "$NVMF_PORT", 00:23:15.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.220 "hdgst": ${hdgst:-false}, 00:23:15.220 "ddgst": ${ddgst:-false} 00:23:15.220 }, 00:23:15.220 "method": "bdev_nvme_attach_controller" 00:23:15.220 } 00:23:15.220 EOF 00:23:15.220 )") 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.220 { 00:23:15.220 "params": { 00:23:15.220 "name": "Nvme$subsystem", 00:23:15.220 "trtype": "$TEST_TRANSPORT", 00:23:15.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.220 "adrfam": "ipv4", 00:23:15.220 "trsvcid": "$NVMF_PORT", 00:23:15.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.220 "hdgst": ${hdgst:-false}, 00:23:15.220 "ddgst": ${ddgst:-false} 00:23:15.220 }, 00:23:15.220 "method": "bdev_nvme_attach_controller" 00:23:15.220 } 00:23:15.220 EOF 00:23:15.220 )") 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.220 { 00:23:15.220 "params": { 00:23:15.220 "name": "Nvme$subsystem", 00:23:15.220 "trtype": "$TEST_TRANSPORT", 00:23:15.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.220 "adrfam": "ipv4", 00:23:15.220 "trsvcid": "$NVMF_PORT", 00:23:15.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.220 "hdgst": ${hdgst:-false}, 00:23:15.220 "ddgst": ${ddgst:-false} 00:23:15.220 }, 00:23:15.220 "method": "bdev_nvme_attach_controller" 00:23:15.220 } 00:23:15.220 EOF 00:23:15.220 )") 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.220 { 00:23:15.220 "params": { 00:23:15.220 "name": "Nvme$subsystem", 00:23:15.220 "trtype": "$TEST_TRANSPORT", 00:23:15.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.220 "adrfam": "ipv4", 00:23:15.220 "trsvcid": "$NVMF_PORT", 00:23:15.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.220 "hdgst": ${hdgst:-false}, 00:23:15.220 "ddgst": ${ddgst:-false} 00:23:15.220 }, 00:23:15.220 "method": "bdev_nvme_attach_controller" 00:23:15.220 } 00:23:15.220 EOF 00:23:15.220 )") 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:15.220 [2024-05-15 08:35:02.235542] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:23:15.220 [2024-05-15 08:35:02.235590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356842 ] 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.220 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.220 { 00:23:15.220 "params": { 00:23:15.220 "name": "Nvme$subsystem", 00:23:15.220 "trtype": "$TEST_TRANSPORT", 00:23:15.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.220 "adrfam": "ipv4", 00:23:15.220 "trsvcid": "$NVMF_PORT", 00:23:15.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.221 "hdgst": ${hdgst:-false}, 00:23:15.221 "ddgst": ${ddgst:-false} 00:23:15.221 }, 00:23:15.221 "method": "bdev_nvme_attach_controller" 00:23:15.221 } 00:23:15.221 EOF 00:23:15.221 )") 00:23:15.221 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:15.480 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.480 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.480 { 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme$subsystem", 00:23:15.480 "trtype": "$TEST_TRANSPORT", 00:23:15.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "$NVMF_PORT", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.480 "hdgst": ${hdgst:-false}, 00:23:15.480 "ddgst": ${ddgst:-false} 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 } 00:23:15.480 EOF 00:23:15.480 )") 00:23:15.480 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:15.480 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.480 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.480 { 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme$subsystem", 00:23:15.480 "trtype": "$TEST_TRANSPORT", 00:23:15.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "$NVMF_PORT", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.480 "hdgst": ${hdgst:-false}, 00:23:15.480 "ddgst": ${ddgst:-false} 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 } 00:23:15.480 EOF 00:23:15.480 )") 00:23:15.480 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:15.480 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
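With the ten stanzas assembled and validated, the tc2 run now has two independent RPC endpoints: the target keeps /var/tmp/spdk.sock inside the namespace, while bdevperf (launched at shutdown.sh@102 above) listens on /var/tmp/bdevperf.sock and takes its controllers from the generated JSON. That invocation, annotated with the paths and flags as they appear in this run:

# -r: bdevperf's own RPC socket, distinct from the target's spdk.sock
# --json: the config arrives as /dev/fd/NN through process substitution
# -q 64: queue depth; -o 65536: 64 KiB I/Os; -w verify: read-and-check; -t 10: seconds
build/examples/bdevperf -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
  -q 64 -o 65536 -w verify -t 10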
00:23:15.480 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:15.480 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.480 08:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme1", 00:23:15.480 "trtype": "tcp", 00:23:15.480 "traddr": "10.0.0.2", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "4420", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.480 "hdgst": false, 00:23:15.480 "ddgst": false 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 },{ 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme2", 00:23:15.480 "trtype": "tcp", 00:23:15.480 "traddr": "10.0.0.2", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "4420", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:15.480 "hdgst": false, 00:23:15.480 "ddgst": false 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 },{ 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme3", 00:23:15.480 "trtype": "tcp", 00:23:15.480 "traddr": "10.0.0.2", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "4420", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:15.480 "hdgst": false, 00:23:15.480 "ddgst": false 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 },{ 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme4", 00:23:15.480 "trtype": "tcp", 00:23:15.480 "traddr": "10.0.0.2", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "4420", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:15.480 "hdgst": false, 00:23:15.480 "ddgst": false 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 },{ 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme5", 00:23:15.480 "trtype": "tcp", 00:23:15.480 "traddr": "10.0.0.2", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "4420", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:15.480 "hdgst": false, 00:23:15.480 "ddgst": false 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 },{ 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme6", 00:23:15.480 "trtype": "tcp", 00:23:15.480 "traddr": "10.0.0.2", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "4420", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:15.480 "hdgst": false, 00:23:15.480 "ddgst": false 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 },{ 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme7", 00:23:15.480 "trtype": "tcp", 00:23:15.480 "traddr": "10.0.0.2", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "4420", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:15.480 "hdgst": false, 00:23:15.480 "ddgst": false 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 },{ 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme8", 00:23:15.480 "trtype": "tcp", 00:23:15.480 "traddr": "10.0.0.2", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "4420", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:15.480 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:23:15.480 "hdgst": false, 00:23:15.480 "ddgst": false 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 },{ 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme9", 00:23:15.480 "trtype": "tcp", 00:23:15.480 "traddr": "10.0.0.2", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "4420", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:15.480 "hdgst": false, 00:23:15.480 "ddgst": false 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 },{ 00:23:15.480 "params": { 00:23:15.480 "name": "Nvme10", 00:23:15.480 "trtype": "tcp", 00:23:15.480 "traddr": "10.0.0.2", 00:23:15.480 "adrfam": "ipv4", 00:23:15.480 "trsvcid": "4420", 00:23:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:15.480 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:15.480 "hdgst": false, 00:23:15.480 "ddgst": false 00:23:15.480 }, 00:23:15.480 "method": "bdev_nvme_attach_controller" 00:23:15.480 }' 00:23:15.480 [2024-05-15 08:35:02.292318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.480 [2024-05-15 08:35:02.365054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.386 Running I/O for 10 seconds... 00:23:17.386 08:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:17.386 08:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:23:17.386 08:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:17.386 08:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.386 08:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:17.386 08:35:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.386 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.645 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.645 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:17.645 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:17.645 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=152 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 152 -ge 100 ']' 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 356842 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 356842 ']' 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 356842 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 356842 00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:17.906 08:35:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 356842'
killing process with pid 356842
00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 356842
00:23:17.906 08:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 356842
00:23:17.906 Received shutdown signal, test time was about 0.927846 seconds
00:23:17.906
00:23:17.906 Latency(us)
00:23:17.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:17.906 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:17.906 Verification LBA range: start 0x0 length 0x400
00:23:17.906 Nvme1n1 : 0.92 278.70 17.42 0.00 0.00 226785.95 16868.40 235245.75
00:23:17.906 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:17.906 Verification LBA range: start 0x0 length 0x400
00:23:17.906 Nvme2n1 : 0.92 278.96 17.44 0.00 0.00 223054.58 19603.81 248011.02
00:23:17.906 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:17.906 Verification LBA range: start 0x0 length 0x400
00:23:17.906 Nvme3n1 : 0.91 280.85 17.55 0.00 0.00 217440.28 11910.46 242540.19
00:23:17.906 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:17.906 Verification LBA range: start 0x0 length 0x400
00:23:17.906 Nvme4n1 : 0.90 283.09 17.69 0.00 0.00 211405.47 16640.45 235245.75
00:23:17.906 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:17.906 Verification LBA range: start 0x0 length 0x400
00:23:17.906 Nvme5n1 : 0.92 282.10 17.63 0.00 0.00 208556.92 2293.76 229774.91
00:23:17.906 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:17.906 Verification LBA range: start 0x0 length 0x400
00:23:17.906 Nvme6n1 : 0.93 276.11 17.26 0.00 0.00 209513.29 17894.18 235245.75
00:23:17.906 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:17.906 Verification LBA range: start 0x0 length 0x400
00:23:17.906 Nvme7n1 : 0.93 276.75 17.30 0.00 0.00 205079.60 30317.52 216097.84
00:23:17.906 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:17.906 Verification LBA range: start 0x0 length 0x400
00:23:17.906 Nvme8n1 : 0.89 216.53 13.53 0.00 0.00 255555.08 16070.57 237069.36
00:23:17.906 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:17.906 Verification LBA range: start 0x0 length 0x400
00:23:17.906 Nvme9n1 : 0.90 213.89 13.37 0.00 0.00 253987.62 36700.16 235245.75
00:23:17.906 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:17.906 Verification LBA range: start 0x0 length 0x400
00:23:17.906 Nvme10n1 : 0.90 213.20 13.32 0.00 0.00 249744.99 19945.74 253481.85
00:23:17.906 ===================================================================================================================
00:23:17.906 Total : 2600.17 162.51 0.00 0.00 223898.61 2293.76 253481.85
00:23:18.165 08:35:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 356549
00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:23:19.101 08:35:06
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.101 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.101 rmmod nvme_tcp 00:23:19.359 rmmod nvme_fabrics 00:23:19.359 rmmod nvme_keyring 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 356549 ']' 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 356549 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 356549 ']' 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 356549 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 356549 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 356549' 00:23:19.359 killing process with pid 356549 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 356549 00:23:19.359 [2024-05-15 08:35:06.208180] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:19.359 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 356549 00:23:19.618 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.618 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.618 08:35:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.618 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.618 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.618 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.618 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.618 08:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.155 00:23:22.155 real 0m8.199s 00:23:22.155 user 0m25.317s 00:23:22.155 sys 0m1.306s 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:22.155 ************************************ 00:23:22.155 END TEST nvmf_shutdown_tc2 00:23:22.155 ************************************ 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:22.155 ************************************ 00:23:22.155 START TEST nvmf_shutdown_tc3 00:23:22.155 ************************************ 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.155 
08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:22.155 08:35:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:22.155 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:22.156 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:22.156 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:22.156 Found net devices under 0000:86:00.0: cvl_0_0 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.156 08:35:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:22.156 Found net devices under 0000:86:00.1: cvl_0_1 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:22.156 08:35:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:22.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:23:22.156 00:23:22.156 --- 10.0.0.2 ping statistics --- 00:23:22.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.156 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:23:22.156 00:23:22.156 --- 10.0.0.1 ping statistics --- 00:23:22.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.156 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:22.156 08:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=358040 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 358040 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 358040 ']' 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:22.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:22.156 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.156 [2024-05-15 08:35:09.079847] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:23:22.156 [2024-05-15 08:35:09.079896] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.156 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.156 [2024-05-15 08:35:09.138141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:22.416 [2024-05-15 08:35:09.218859] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.416 [2024-05-15 08:35:09.218891] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.416 [2024-05-15 08:35:09.218898] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.416 [2024-05-15 08:35:09.218904] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.416 [2024-05-15 08:35:09.218909] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
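At this point the tc3 target (nvmfpid=358040 above) has been launched inside the cvl_0_0_ns_spdk namespace and the harness sits in waitforlisten until the RPC socket answers. A rough sketch of that launch-and-poll pattern; the command line, socket path and the 100-retry budget come from the trace, while the rpc_get_methods probe is an assumption modeled on SPDK's stock waitforlisten:

# Sketch: start the target inside the test namespace, then poll its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for ((i = 100; i != 0; i--)); do
	# The probe RPC keeps failing until the app has bound /var/tmp/spdk.sock.
	if ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
		break
	fi
	sleep 0.1
done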
00:23:22.416 [2024-05-15 08:35:09.219004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.416 [2024-05-15 08:35:09.219022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.416 [2024-05-15 08:35:09.219135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.416 [2024-05-15 08:35:09.219137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.982 [2024-05-15 08:35:09.924050] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:22.982 08:35:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.982 08:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.982 Malloc1 00:23:23.243 [2024-05-15 08:35:10.015531] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:23.243 [2024-05-15 08:35:10.015787] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.243 Malloc2 00:23:23.243 Malloc3 00:23:23.243 Malloc4 00:23:23.243 Malloc5 00:23:23.243 Malloc6 00:23:23.243 Malloc7 00:23:23.502 Malloc8 00:23:23.502 Malloc9 00:23:23.502 Malloc10 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=358324 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 358324 /var/tmp/bdevperf.sock 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 358324 ']' 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 
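Each `# cat` in the create_subsystems loop above appended one subsystem's RPC batch to rpcs.txt; replaying that file is what produced the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notice. The batch itself is not shown in this excerpt, so the per-subsystem sketch below is a reconstruction from those notices: the RPC names are the standard SPDK ones, but the bdev size/block-size arguments, serial numbers and exact flags are assumptions.

# Sketch: emit one RPC batch per subsystem (i = 1..10) into rpcs.txt.
for i in {1..10}; do
cat <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done >> rpcs.txt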
00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.502 { 00:23:23.502 "params": { 00:23:23.502 "name": "Nvme$subsystem", 00:23:23.502 "trtype": "$TEST_TRANSPORT", 00:23:23.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.502 "adrfam": "ipv4", 00:23:23.502 "trsvcid": "$NVMF_PORT", 00:23:23.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.502 "hdgst": ${hdgst:-false}, 00:23:23.502 "ddgst": ${ddgst:-false} 00:23:23.502 }, 00:23:23.502 "method": "bdev_nvme_attach_controller" 00:23:23.502 } 00:23:23.502 EOF 00:23:23.502 )") 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.502 { 00:23:23.502 "params": { 00:23:23.502 "name": "Nvme$subsystem", 00:23:23.502 "trtype": "$TEST_TRANSPORT", 00:23:23.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.502 "adrfam": "ipv4", 00:23:23.502 "trsvcid": "$NVMF_PORT", 00:23:23.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.502 "hdgst": ${hdgst:-false}, 00:23:23.502 "ddgst": ${ddgst:-false} 00:23:23.502 }, 00:23:23.502 "method": "bdev_nvme_attach_controller" 00:23:23.502 } 00:23:23.502 EOF 00:23:23.502 )") 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.502 { 00:23:23.502 "params": { 00:23:23.502 "name": "Nvme$subsystem", 00:23:23.502 "trtype": "$TEST_TRANSPORT", 00:23:23.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.502 "adrfam": "ipv4", 00:23:23.502 "trsvcid": "$NVMF_PORT", 00:23:23.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.502 "hdgst": ${hdgst:-false}, 00:23:23.502 "ddgst": ${ddgst:-false} 00:23:23.502 }, 00:23:23.502 "method": "bdev_nvme_attach_controller" 00:23:23.502 } 00:23:23.502 EOF 00:23:23.502 )") 00:23:23.502 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.503 { 00:23:23.503 "params": { 
00:23:23.503 "name": "Nvme$subsystem", 00:23:23.503 "trtype": "$TEST_TRANSPORT", 00:23:23.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "$NVMF_PORT", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.503 "hdgst": ${hdgst:-false}, 00:23:23.503 "ddgst": ${ddgst:-false} 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 } 00:23:23.503 EOF 00:23:23.503 )") 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.503 { 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme$subsystem", 00:23:23.503 "trtype": "$TEST_TRANSPORT", 00:23:23.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "$NVMF_PORT", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.503 "hdgst": ${hdgst:-false}, 00:23:23.503 "ddgst": ${ddgst:-false} 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 } 00:23:23.503 EOF 00:23:23.503 )") 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.503 { 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme$subsystem", 00:23:23.503 "trtype": "$TEST_TRANSPORT", 00:23:23.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "$NVMF_PORT", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.503 "hdgst": ${hdgst:-false}, 00:23:23.503 "ddgst": ${ddgst:-false} 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 } 00:23:23.503 EOF 00:23:23.503 )") 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.503 { 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme$subsystem", 00:23:23.503 "trtype": "$TEST_TRANSPORT", 00:23:23.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "$NVMF_PORT", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.503 "hdgst": ${hdgst:-false}, 00:23:23.503 "ddgst": ${ddgst:-false} 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 } 00:23:23.503 EOF 00:23:23.503 )") 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:23.503 [2024-05-15 08:35:10.484004] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
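For reference, these stanzas feed the bdevperf run launched above (perfpid=358324); its flags map directly onto the per-bdev table printed at shutdown, such as the tc2 table earlier (depth 64, 64 KiB verify I/O, 10 s run):

# The traced invocation, reflowed with its flags annotated:
#   -r      private RPC socket for this bdevperf instance
#   --json  attach-controller config, fed in via process substitution
#   -q      queue depth per bdev (64 outstanding I/Os)
#   -o      I/O size in bytes (65536 = 64 KiB)
#   -w      workload pattern (verify: written data is read back and compared)
#   -t      run time in seconds
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 \
	-q 64 -o 65536 -w verify -t 10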
00:23:23.503 [2024-05-15 08:35:10.484054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358324 ] 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.503 { 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme$subsystem", 00:23:23.503 "trtype": "$TEST_TRANSPORT", 00:23:23.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "$NVMF_PORT", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.503 "hdgst": ${hdgst:-false}, 00:23:23.503 "ddgst": ${ddgst:-false} 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 } 00:23:23.503 EOF 00:23:23.503 )") 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.503 { 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme$subsystem", 00:23:23.503 "trtype": "$TEST_TRANSPORT", 00:23:23.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "$NVMF_PORT", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.503 "hdgst": ${hdgst:-false}, 00:23:23.503 "ddgst": ${ddgst:-false} 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 } 00:23:23.503 EOF 00:23:23.503 )") 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.503 { 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme$subsystem", 00:23:23.503 "trtype": "$TEST_TRANSPORT", 00:23:23.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "$NVMF_PORT", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.503 "hdgst": ${hdgst:-false}, 00:23:23.503 "ddgst": ${ddgst:-false} 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 } 00:23:23.503 EOF 00:23:23.503 )") 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:23.503 08:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme1", 00:23:23.503 "trtype": "tcp", 00:23:23.503 "traddr": "10.0.0.2", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "4420", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.503 "hdgst": false, 00:23:23.503 "ddgst": false 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 },{ 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme2", 00:23:23.503 "trtype": "tcp", 00:23:23.503 "traddr": "10.0.0.2", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "4420", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:23.503 "hdgst": false, 00:23:23.503 "ddgst": false 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 },{ 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme3", 00:23:23.503 "trtype": "tcp", 00:23:23.503 "traddr": "10.0.0.2", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "4420", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:23.503 "hdgst": false, 00:23:23.503 "ddgst": false 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 },{ 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme4", 00:23:23.503 "trtype": "tcp", 00:23:23.503 "traddr": "10.0.0.2", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "4420", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:23.503 "hdgst": false, 00:23:23.503 "ddgst": false 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 },{ 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme5", 00:23:23.503 "trtype": "tcp", 00:23:23.503 "traddr": "10.0.0.2", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "4420", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:23.503 "hdgst": false, 00:23:23.503 "ddgst": false 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 },{ 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme6", 00:23:23.503 "trtype": "tcp", 00:23:23.503 "traddr": "10.0.0.2", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "4420", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:23.503 "hdgst": false, 00:23:23.503 "ddgst": false 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 },{ 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme7", 00:23:23.503 "trtype": "tcp", 00:23:23.503 "traddr": "10.0.0.2", 00:23:23.503 "adrfam": "ipv4", 00:23:23.503 "trsvcid": "4420", 00:23:23.503 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:23.503 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:23.503 "hdgst": false, 00:23:23.503 "ddgst": false 00:23:23.503 }, 00:23:23.503 "method": "bdev_nvme_attach_controller" 00:23:23.503 },{ 00:23:23.503 "params": { 00:23:23.503 "name": "Nvme8", 00:23:23.503 "trtype": "tcp", 00:23:23.503 "traddr": "10.0.0.2", 00:23:23.503 "adrfam": "ipv4", 00:23:23.504 "trsvcid": "4420", 00:23:23.504 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:23.504 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:23.504 "hdgst": false, 
00:23:23.504 "ddgst": false 00:23:23.504 }, 00:23:23.504 "method": "bdev_nvme_attach_controller" 00:23:23.504 },{ 00:23:23.504 "params": { 00:23:23.504 "name": "Nvme9", 00:23:23.504 "trtype": "tcp", 00:23:23.504 "traddr": "10.0.0.2", 00:23:23.504 "adrfam": "ipv4", 00:23:23.504 "trsvcid": "4420", 00:23:23.504 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:23.504 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:23.504 "hdgst": false, 00:23:23.504 "ddgst": false 00:23:23.504 }, 00:23:23.504 "method": "bdev_nvme_attach_controller" 00:23:23.504 },{ 00:23:23.504 "params": { 00:23:23.504 "name": "Nvme10", 00:23:23.504 "trtype": "tcp", 00:23:23.504 "traddr": "10.0.0.2", 00:23:23.504 "adrfam": "ipv4", 00:23:23.504 "trsvcid": "4420", 00:23:23.504 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:23.504 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:23.504 "hdgst": false, 00:23:23.504 "ddgst": false 00:23:23.504 }, 00:23:23.504 "method": "bdev_nvme_attach_controller" 00:23:23.504 }' 00:23:23.504 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.763 [2024-05-15 08:35:10.539905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.763 [2024-05-15 08:35:10.616317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.139 Running I/O for 10 seconds... 00:23:25.139 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:25.139 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:23:25.139 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:25.139 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.139 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:25.398 08:35:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:25.398 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:25.657 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:25.657 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:25.657 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:25.657 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:25.657 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.657 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.657 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.657 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:25.657 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:25.657 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 358040 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 358040 ']' 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 358040 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 
-- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 358040
00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:23:25.922 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 358040'
killing process with pid 358040
00:23:25.923 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 358040
00:23:25.923 [2024-05-15 08:35:12.914523] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:23:25.923 08:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 358040
00:23:25.923 [2024-05-15 08:35:12.915952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:25.923 [2024-05-15 08:35:12.915991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further print_command/print_completion pairs elided: WRITE sqid:1 cid:8-63 (lba 25600-32640) and READ sqid:1 cid:0-6 (lba 24576-25344), len:128 each, all reported ABORTED - SQ DELETION (00/08) ...]
00:23:25.924 [2024-05-15 08:35:12.917024] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19927a0 was disconnected and freed. reset controller.
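For context on the read_io_count samples traced earlier (3, then 67, then 131, each tested with '[' ... -ge 100 ']' and separated by sleep 0.25): target/shutdown.sh's waitforio polls bdevperf's RPC socket until the bdev shows at least 100 completed reads. A sketch of that loop, assuming SPDK's scripts/rpc.py is on PATH, where the in-tree helper goes through its rpc_cmd wrapper instead:

#!/usr/bin/env bash
# Sketch of the waitforio polling loop traced earlier: sample
# bdev_get_iostat over the bdevperf RPC socket until the bdev has
# completed at least 100 reads, giving up after 10 samples.
waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # rpc.py on PATH is an assumption; the test script calls rpc_cmd.
        read_io_count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25   # matches the 'sleep 0.25' between samples in the trace
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1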
00:23:25.924 [2024-05-15 08:35:12.918761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:25.924 [2024-05-15 08:35:12.918817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ee8a0 (9): Bad file descriptor
00:23:25.924 [2024-05-15 08:35:12.919297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea5e0 is same with the state(5) to be set
[... identical recv-state message for tqpair=0x13ea5e0 repeated dozens of times, timestamps 08:35:12.919297-08:35:12.919701; repetitions elided ...]
00:23:25.925 [2024-05-15 08:35:12.920135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:25.925 [2024-05-15 08:35:12.920222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:25.925 [2024-05-15 08:35:12.920233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ee8a0 with addr=10.0.0.2, port=4420
00:23:25.925 [2024-05-15 08:35:12.920242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ee8a0 is same with the state(5) to be set
00:23:25.925 [2024-05-15 08:35:12.921063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118aa40 is same with the state(5) to be set
[... identical recv-state message for tqpair=0x118aa40 repeated dozens of times, timestamps 08:35:12.921063-08:35:12.921453; repetitions elided ...]
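The connect() failed, errno = 111 (ECONNREFUSED) entries above are expected fallout: killprocess took the nvmf target down, so bdevperf's reconnect attempts to 10.0.0.2:4420 are refused until the test tears everything down. The killprocess helper traced before the flood (kill -0, uname, ps --no-headers -o comm=, kill, wait) has roughly this shape; a sketch, with the sudo-wrapper branch of the real autotest_common.sh helper elided:

#!/usr/bin/env bash
# Sketch of the killprocess pattern from the trace: probe the pid,
# signal it, then reap it so its exit status and shutdown logs land
# in the test log.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1           # @950: is the process still there?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # @952
        if [ "$process_name" = sudo ]; then
            # The traced run takes the non-sudo branch (comm was
            # reactor_1); sudo-wrapper handling is elided in this sketch.
            :
        fi
    fi
    echo "killing process with pid $pid"   # @964
    kill "$pid"                            # @965: default SIGTERM
    wait "$pid" || true                    # @970: reap, tolerate nonzero exit
}

# invoked above as: killprocess 358040   (shutdown.sh@135)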
00:23:25.926 [2024-05-15 08:35:12.921882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ee8a0 (9): Bad file descriptor
00:23:25.926 [2024-05-15 08:35:12.921933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:25.926 [2024-05-15 08:35:12.921945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for qid:0 cid:1-3 on each of three admin qpairs; 11 further pairs elided, each group ending with the recv-state notice kept below ...]
00:23:25.926 [2024-05-15 08:35:12.921993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a3980 is same with the state(5) to be set
00:23:25.926 [2024-05-15 08:35:12.922092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ec730 is same with the state(5) to be set
00:23:25.926 [2024-05-15 08:35:12.922178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1819a30 is same with the state(5) to be set
00:23:25.926 [2024-05-15 08:35:12.922192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118aee0 is same with the state(5) to be set
[... identical recv-state message for tqpair=0x118aee0 repeated dozens of times, timestamps 08:35:12.922192-08:35:12.922560; repetitions elided ...]
00:23:25.927 [2024-05-15 08:35:12.922566] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118aee0 is same with the state(5) to be set 00:23:25.927 [2024-05-15 08:35:12.922565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error [2024-05-15 08:35:12.922572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118aee0 is same with state 00:23:25.927 the state(5) to be set 00:23:25.927 [2024-05-15 08:35:12.922581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118aee0 is same with the state(5) to be set 00:23:25.927 [2024-05-15 08:35:12.922583] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:25.927 [2024-05-15 08:35:12.922588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118aee0 is same with the state(5) to be set 00:23:25.927 [2024-05-15 08:35:12.922592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:25.927 [2024-05-15 08:35:12.922594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118aee0 is same with the state(5) to be set 00:23:25.927 [2024-05-15 08:35:12.922604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118aee0 is same with the state(5) to be set 00:23:25.927 [2024-05-15 08:35:12.923154] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.927 [2024-05-15 08:35:12.923788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923876] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.923995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the 
state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924132] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924149] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b380 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.924195] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:25.928 [2024-05-15 08:35:12.924261] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:25.928 [2024-05-15 08:35:12.924747] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:25.928 [2024-05-15 08:35:12.925413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 
00:23:25.928 [2024-05-15 08:35:12.925499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.928 [2024-05-15 08:35:12.925531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925776] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.925820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9360 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.926804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9800 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.926824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9800 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.926832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9800 is same with the state(5) to be set 00:23:25.929 [2024-05-15 08:35:12.926885] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:25.929 [2024-05-15 08:35:12.927026] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e86f0 was disconnected and freed. reset controller. 
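The repeated nvmf_tcp_qpair_set_recv_state errors above are logged by the target (lib/nvmf/tcp.c) whenever a qpair is asked to enter the receive state it is already in; the "(5)" is the raw value of the internal recv-state enum, which in SPDK trees of this vintage likely corresponds to a teardown/error state (the exact enum layout is version-dependent). The "Unexpected PDU type 0x00" errors come from the host side (nvme_tcp.c): per the NVMe/TCP specification, PDU type 0x00 is ICReq, which a host must never receive after connection setup, and in practice an all-zero common header usually means the socket returned zeroed data after the peer went away. A minimal sketch of that host-side check, assuming the 8-byte common-header layout from the spec rather than SPDK's actual structures:

```c
/* Hypothetical host-side PDU common-header check; the field layout
 * follows the NVMe/TCP spec's 8-byte CH, not SPDK's internal types. */
#include <stdint.h>
#include <stdio.h>

struct nvme_tcp_common_hdr {
	uint8_t  pdu_type; /* 0x00=ICReq, 0x01=ICResp, 0x03=C2HTermReq,
	                    * 0x05=CapsuleResp, 0x07=C2HData, 0x09=R2T */
	uint8_t  flags;
	uint8_t  hlen;     /* header length in bytes */
	uint8_t  pdo;      /* PDU data offset */
	uint32_t plen;     /* total PDU length in bytes */
};

/* Returns 0 if a host may legally receive this PDU type, -1 otherwise. */
static int host_check_pdu_type(const struct nvme_tcp_common_hdr *ch)
{
	switch (ch->pdu_type) {
	case 0x01: /* ICResp */
	case 0x03: /* C2HTermReq */
	case 0x05: /* CapsuleResp */
	case 0x07: /* C2HData */
	case 0x09: /* R2T */
		return 0;
	default:
		/* Type 0x00 lands here: either a stray ICReq (host-to-
		 * controller only) or, more often during teardown, zeroed
		 * bytes from a dying socket, matching the errors above. */
		fprintf(stderr, "Unexpected PDU type 0x%02x\n", ch->pdu_type);
		return -1;
	}
}

int main(void)
{
	struct nvme_tcp_common_hdr zeroed = {0};
	host_check_pdu_type(&zeroed); /* prints "Unexpected PDU type 0x00" */
	return 0;
}
```

Either way the qpair cannot continue, which is what the "disconnected and freed. reset controller" notice above reports.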
[2024-05-15 08:35:12.927495] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:25.929
[2024-05-15 08:35:12.927536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3610 (9): Bad file descriptor 00:23:25.929
[2024-05-15 08:35:12.927557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea140 is same with the state(5) to be set 00:23:25.929
    (last message repeated ~45 times, 08:35:12.927581 through 08:35:12.927884)
[2024-05-15 08:35:12.927627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929
[2024-05-15 08:35:12.927639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.929
[2024-05-15 08:35:12.927655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929
[2024-05-15 08:35:12.927665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.929
[2024-05-15 08:35:12.927675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929
[2024-05-15 08:35:12.927684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.929
[2024-05-15 08:35:12.927693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930
[2024-05-15 08:35:12.927701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.930
[2024-05-15 08:35:12.927712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9bf0 is same with the state(5) to be set 00:23:25.930
[2024-05-15 08:35:12.927767] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e9bf0 was disconnected and freed. reset controller. 00:23:25.930
[2024-05-15 08:35:12.928635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:25.930
[2024-05-15 08:35:12.928673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b54c0 (9): Bad file descriptor 00:23:25.930
[2024-05-15 08:35:12.928900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.930
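The "connect() failed, errno = 111" entries that begin here are Linux ECONNREFUSED: while the test bounces the target, nothing is listening at 10.0.0.2:4420 (the address and port reported a few records below), so every reconnect attempt is refused. The same failure mode can be reproduced standalone; the sketch below assumes a reachable host with no listener on that port, otherwise errno would be a timeout or unreachable error instead:

```c
/* Connect to a TCP endpoint with no listener and report errno.
 * On Linux this prints "connect() failed, errno = 111 (Connection
 * refused)", the same ECONNREFUSED the posix_sock_create errors
 * above are surfacing.  Address and port are taken from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port   = htons(4420), /* NVMe/TCP port from the log */
	};
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		printf("connect() failed, errno = %d (%s)\n",
		       errno, strerror(errno));
	}
	close(fd);
	return 0;
}
```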
[2024-05-15 08:35:12.928981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.930
[2024-05-15 08:35:12.928993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f3610 with addr=10.0.0.2, port=4420 00:23:25.930
[2024-05-15 08:35:12.929000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f3610 is same with the state(5) to be set 00:23:25.930
[2024-05-15 08:35:12.929058] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:25.930
[2024-05-15 08:35:12.929135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3610 (9): Bad file descriptor 00:23:25.930
[2024-05-15 08:35:12.929528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.930
[2024-05-15 08:35:12.929626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.930
[2024-05-15 08:35:12.929636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b54c0 with addr=10.0.0.2, port=4420 00:23:25.930
[2024-05-15 08:35:12.929644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b54c0 is same with the state(5) to be set 00:23:25.930
[2024-05-15 08:35:12.929651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:25.930
[2024-05-15 08:35:12.929658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:25.930
[2024-05-15 08:35:12.929665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:25.930
[2024-05-15 08:35:12.929753] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.930
[2024-05-15 08:35:12.929769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b54c0 (9): Bad file descriptor 00:23:25.930
[2024-05-15 08:35:12.929839] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:25.930
[2024-05-15 08:35:12.929855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:25.930
[2024-05-15 08:35:12.929861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:25.930
[2024-05-15 08:35:12.929868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:25.930
[2024-05-15 08:35:12.930191] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.930
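Each refused connect completes one pass of the reconnect machinery whose messages bracket it: nvme_ctrlr_disconnect starts a reset, spdk_nvme_ctrlr_reconnect_poll_async retries the fabric connect, and on failure nvme_ctrlr_fail marks the controller failed and _bdev_nvme_reset_ctrlr_complete reports "Resetting controller failed." The shape of that control flow, reduced to a hedged sketch (plain C with invented helper names and retry policy, not SPDK's API):

```c
/* Hypothetical reconnect loop illustrating the failure chain in the
 * log: disconnect -> poll reconnect -> mark failed -> report failure.
 * try_connect() stands in for the fabric-level connect and always
 * refuses here, the way the target does while its listener is down. */
#include <stdbool.h>
#include <stdio.h>

enum ctrlr_state { CTRLR_OK, CTRLR_FAILED };

static bool try_connect(void) { return false; /* ECONNREFUSED stand-in */ }

static enum ctrlr_state reset_ctrlr(const char *nqn, int max_attempts)
{
	printf("[%s] resetting controller\n", nqn);
	for (int i = 0; i < max_attempts; i++) {
		if (try_connect()) {
			return CTRLR_OK;
		}
		printf("[%s] controller reinitialization failed\n", nqn);
	}
	printf("[%s] in failed state.\n", nqn);
	return CTRLR_FAILED;
}

int main(void)
{
	if (reset_ctrlr("nqn.2016-06.io.spdk:cnode7", 1) != CTRLR_OK) {
		printf("Resetting controller failed.\n");
	}
	return 0;
}
```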
[2024-05-15 08:35:12.930330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.930
[2024-05-15 08:35:12.930538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.930
[2024-05-15 08:35:12.930549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ee8a0 with addr=10.0.0.2, port=4420 00:23:25.930
[2024-05-15 08:35:12.930556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ee8a0 is same with the state(5) to be set 00:23:25.930
[2024-05-15 08:35:12.930747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ee8a0 (9): Bad file descriptor 00:23:25.930
[2024-05-15 08:35:12.930778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:25.930
[2024-05-15 08:35:12.930786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:25.930
[2024-05-15 08:35:12.930793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:25.930
[2024-05-15 08:35:12.930823] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.930
[2024-05-15 08:35:12.931919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.930
[2024-05-15 08:35:12.931930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.930
[2024-05-15 08:35:12.931939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.930
[2024-05-15 08:35:12.931946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.930
[2024-05-15 08:35:12.931955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.930
[2024-05-15 08:35:12.931961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.930
[2024-05-15 08:35:12.931969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.930
[2024-05-15 08:35:12.931976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.930
[2024-05-15 08:35:12.931982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13db2f0 is same with the state(5) to be set 00:23:25.930
[2024-05-15 08:35:12.932005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.930
[2024-05-15 08:35:12.932014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.930
[2024-05-15 08:35:12.932022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.930
[2024-05-15 08:35:12.932029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.930
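The "ABORTED - SQ DELETION (00/08)" completions decode as status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion, per the NVMe base specification; the trailing "p:0 m:0 dnr:0" are the phase, more, and do-not-retry bits of the same completion. Every admin async event request and queued read still outstanding when its submission queue is destroyed during the reset gets completed with this status rather than being silently dropped. A small decoder for the (SCT/SC) pair the log prints:

```c
/* Decode the "(SCT/SC)" pair printed by the log, e.g. "(00/08)".
 * SCT 0x0 selects the generic command status set; within that set,
 * SC 0x08 is "Command Aborted due to SQ Deletion" per the NVMe spec. */
#include <stdint.h>
#include <stdio.h>

static const char *decode_status(uint8_t sct, uint8_t sc)
{
	if (sct == 0x0 && sc == 0x00) {
		return "SUCCESS";
	}
	if (sct == 0x0 && sc == 0x08) {
		return "ABORTED - SQ DELETION";
	}
	return "OTHER";
}

int main(void)
{
	uint8_t sct = 0x00, sc = 0x08; /* the (00/08) from the log */
	printf("(%02x/%02x) -> %s\n", sct, sc, decode_status(sct, sc));
	return 0;
}
```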
[2024-05-15 08:35:12.932038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.930
[2024-05-15 08:35:12.932045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.930
[2024-05-15 08:35:12.932053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.930
[2024-05-15 08:35:12.932059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.930
[2024-05-15 08:35:12.932066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1838a00 is same with the state(5) to be set 00:23:25.930
[2024-05-15 08:35:12.932080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a3980 (9): Bad file descriptor 00:23:25.930
[2024-05-15 08:35:12.932104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.930
[2024-05-15 08:35:12.932114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.930
[2024-05-15 08:35:12.932124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.931
[2024-05-15 08:35:12.932131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.932140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.931
[2024-05-15 08:35:12.932148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.932155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.931
[2024-05-15 08:35:12.932162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.932173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18394f0 is same with the state(5) to be set 00:23:25.931
[2024-05-15 08:35:12.932193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ec730 (9): Bad file descriptor 00:23:25.931
[2024-05-15 08:35:12.932206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1819a30 (9): Bad file descriptor 00:23:25.931
[2024-05-15 08:35:12.937482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea140 is same with the state(5) to be set 00:23:25.931
    (last message repeated 15 times, 08:35:12.937494 through 08:35:12.937589)
[2024-05-15 08:35:12.937846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.937858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.937872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.937879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.937889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.937896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.937905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.937914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.937923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.937930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.937938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.937945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.937954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.937961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.937970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.937976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.937985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.937992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931
[2024-05-15 08:35:12.938236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931
[2024-05-15 08:35:12.938244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-05-15 
08:35:12.938252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931 [2024-05-15 08:35:12.938261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-05-15 08:35:12.938267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.931 [2024-05-15 08:35:12.938278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.932 [2024-05-15 08:35:12.938451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.932 [2024-05-15 08:35:12.938465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198a140 is same with the state(5) to be set 00:23:25.932 [2024-05-15 08:35:12.938519] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x198a140 was disconnected and freed. reset controller. 00:23:25.932 [2024-05-15 08:35:12.939383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:25.932 [2024-05-15 08:35:12.939419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ace30 (9): Bad file descriptor 00:23:25.932 [2024-05-15 08:35:12.939462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:26.195 [2024-05-15 08:35:12.939720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:26.195 [2024-05-15 08:35:12.939959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.195 [2024-05-15 08:35:12.940163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.195 [2024-05-15 08:35:12.940180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ace30 with addr=10.0.0.2, port=4420 00:23:26.195 [2024-05-15 08:35:12.940187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ace30 is same with the state(5) to be set 00:23:26.195 [2024-05-15 08:35:12.940379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.195 [2024-05-15 08:35:12.940528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.195 [2024-05-15 08:35:12.940538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f3610 with addr=10.0.0.2, port=4420 00:23:26.195 [2024-05-15 08:35:12.940545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f3610 is same with the state(5) to be set 00:23:26.195 [2024-05-15 08:35:12.940768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.195 [2024-05-15 08:35:12.940941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.195 [2024-05-15 08:35:12.940950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x19b54c0 with addr=10.0.0.2, port=4420 00:23:26.195 [2024-05-15 08:35:12.940957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b54c0 is same with the state(5) to be set 00:23:26.195 [2024-05-15 08:35:12.940965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ace30 (9): Bad file descriptor 00:23:26.195 [2024-05-15 08:35:12.940974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3610 (9): Bad file descriptor 00:23:26.195 [2024-05-15 08:35:12.941015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b54c0 (9): Bad file descriptor 00:23:26.195 [2024-05-15 08:35:12.941025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:26.195 [2024-05-15 08:35:12.941031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:26.195 [2024-05-15 08:35:12.941039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:26.195 [2024-05-15 08:35:12.941049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:26.195 [2024-05-15 08:35:12.941055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:26.195 [2024-05-15 08:35:12.941061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:26.195 [2024-05-15 08:35:12.941098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:26.195 [2024-05-15 08:35:12.941108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:26.195 [2024-05-15 08:35:12.941114] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:26.195 [2024-05-15 08:35:12.941124] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:26.195 [2024-05-15 08:35:12.941131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:26.195 [2024-05-15 08:35:12.941144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:26.195 [2024-05-15 08:35:12.941178] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:26.195 [2024-05-15 08:35:12.941420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.195 [2024-05-15 08:35:12.941638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.195 [2024-05-15 08:35:12.941647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ee8a0 with addr=10.0.0.2, port=4420 00:23:26.195 [2024-05-15 08:35:12.941654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ee8a0 is same with the state(5) to be set 00:23:26.195 [2024-05-15 08:35:12.941682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ee8a0 (9): Bad file descriptor 00:23:26.195 [2024-05-15 08:35:12.941709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:26.195 [2024-05-15 08:35:12.941716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:26.195 [2024-05-15 08:35:12.941723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:26.195 [2024-05-15 08:35:12.941751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:26.195 [2024-05-15 08:35:12.941944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13db2f0 (9): Bad file descriptor 00:23:26.195 [2024-05-15 08:35:12.941959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1838a00 (9): Bad file descriptor 00:23:26.195 [2024-05-15 08:35:12.941980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18394f0 (9): Bad file descriptor 00:23:26.195 [2024-05-15 08:35:12.942063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-05-15 08:35:12.942073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.195 [2024-05-15 08:35:12.942083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:26.196 [2024-05-15 08:35:12.942147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 
08:35:12.942315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942466] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-05-15 08:35:12.942592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.196 [2024-05-15 08:35:12.942600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.942988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.942995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.943004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.943011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.943019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.943026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.943035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.943042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.943050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.943057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.943064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1993a80 is same with the state(5) to be set 00:23:26.197 [2024-05-15 08:35:12.944062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.944074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.944084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.944091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.944100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.944106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.944115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.944122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-05-15 08:35:12.944130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-05-15 08:35:12.944136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944240] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-05-15 08:35:12.944645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.198 [2024-05-15 08:35:12.944653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.199 [2024-05-15 08:35:12.944856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-05-15 08:35:12.944863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:26.199 [2024-05-15 08:35:12.944871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:26.199 [2024-05-15 08:35:12.944878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 12 further READ / "ABORTED - SQ DELETION (00/08)" record pairs elided: cid:52-63, lba stepping by 128 from 31232 to 32640, 08:35:12.944886-08:35:12.949673 ...]
00:23:26.199 [2024-05-15 08:35:12.949680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192d490 is same with the state(5) to be set
00:23:26.199 [2024-05-15 08:35:12.950688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:26.199 [2024-05-15 08:35:12.950705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further READ / "ABORTED - SQ DELETION (00/08)" record pairs elided: cid:1-63, lba stepping by 128 from 16512 to 24448, 08:35:12.950717-08:35:12.951724 ...]
00:23:26.201 [2024-05-15 08:35:12.951732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b5e0 is same with the state(5) to be set
00:23:26.201 [2024-05-15 08:35:12.954612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:26.201 [2024-05-15 08:35:12.954643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:26.201 [2024-05-15 08:35:12.954654] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:26.201 [2024-05-15 08:35:12.954984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.201 [2024-05-15 08:35:12.955151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.201 [2024-05-15 08:35:12.955168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ec730 with addr=10.0.0.2, port=4420
00:23:26.201 [2024-05-15 08:35:12.955177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ec730 is same with the state(5) to be set
00:23:26.201 [2024-05-15 08:35:12.955344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.201 [2024-05-15 08:35:12.955561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.201 [2024-05-15 08:35:12.955573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1819a30 with addr=10.0.0.2, port=4420
00:23:26.201 [2024-05-15 08:35:12.955581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1819a30 is same with the state(5) to be set
00:23:26.201 [2024-05-15 08:35:12.955678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.201 [2024-05-15 08:35:12.955750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.201 [2024-05-15 08:35:12.955761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3980 with addr=10.0.0.2, port=4420
00:23:26.201 [2024-05-15 08:35:12.955769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a3980 is same with the state(5) to be set
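The errno = 111 in the posix_sock_create failures above is ECONNREFUSED on Linux: while the target side of the controller reset is tearing down and rebuilding its listener, nothing is accepting on the NVMe/TCP port, so each reconnect attempt is actively refused. A minimal standalone sketch (not SPDK's posix_sock_create; the address 10.0.0.2:4420 is taken from the log, and the actual outcome depends on what is running there) that surfaces the same errno:

    /* Minimal sketch (not SPDK code): show how connect() surfaces
     * errno 111 (ECONNREFUSED on Linux) when nothing is listening on
     * the NVMe/TCP port, as happens mid-reset in the log above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);            /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* Prints "connect() failed, errno = 111" when the peer is
             * reachable but no listener is bound to the port. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }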
00:23:26.201 [2024-05-15 08:35:12.956245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:26.201 [2024-05-15 08:35:12.956259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further READ / "ABORTED - SQ DELETION (00/08)" record pairs elided: cid:1-63, lba stepping by 128 from 24704 to 32640, 08:35:12.956272-08:35:12.957523 ...]
00:23:26.203 [2024-05-15 08:35:12.957533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192ea20 is same with the state(5) to be set
00:23:26.203 [2024-05-15 08:35:12.958860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:26.203 [2024-05-15 08:35:12.958878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
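Each "ABORTED - SQ DELETION (00/08)" completion above reads as (SCT/SC): Status Code Type 0x0 (generic command status) with Status Code 0x08 (Command Aborted due to SQ Deletion). The queued READs are completed with this error because their submission queue is deleted during the controller reset, not because the media failed. A minimal decoding sketch based on the NVMe base specification's completion-queue-entry layout (illustrative only, not SPDK's spdk_nvme_print_completion):

    /* Sketch: decode the NVMe completion Status Field as the "(SCT/SC)"
     * pair printed above.  Layout per the NVMe base spec, CQE dword 3:
     * bit 16 phase tag, bits 24:17 Status Code, bits 27:25 Status Code
     * Type, bit 30 More, bit 31 Do Not Retry. */
    #include <stdio.h>
    #include <stdint.h>

    static void decode_status(uint32_t cqe_dw3)
    {
        uint32_t p   = (cqe_dw3 >> 16) & 0x1;   /* phase tag */
        uint32_t sc  = (cqe_dw3 >> 17) & 0xff;  /* status code */
        uint32_t sct = (cqe_dw3 >> 25) & 0x7;   /* status code type */
        uint32_t m   = (cqe_dw3 >> 30) & 0x1;   /* more status available */
        uint32_t dnr = (cqe_dw3 >> 31) & 0x1;   /* do not retry */

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        if (sct == 0x0 && sc == 0x08) {
            puts("ABORTED - SQ DELETION: generic status, command aborted"
                 " because its submission queue was deleted");
        }
    }

    int main(void)
    {
        decode_status(0x08u << 17);   /* SCT 0x0, SC 0x08 -> "(00/08)" */
        return 0;
    }

Note the dnr:0 field on every completion above: the Do Not Retry bit is clear, so the controller marks these aborts as retryable and the host can requeue the reads once the reset finishes.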
[... 51 further READ / "ABORTED - SQ DELETION (00/08)" record pairs elided: cid:1-51, lba stepping by 128 from 24704 to 31104, 08:35:12.958891-08:35:12.959950 ...]
00:23:26.205 [2024-05-15 08:35:12.959964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:26.205 [2024-05-15 08:35:12.959973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.959985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 08:35:12.959995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 08:35:12.960016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 08:35:12.960036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 08:35:12.960060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 08:35:12.960080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 08:35:12.960103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 08:35:12.960125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 08:35:12.960147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 08:35:12.960172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 
08:35:12.960194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.205 [2024-05-15 08:35:12.960215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.205 [2024-05-15 08:35:12.960225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e71f0 is same with the state(5) to be set 00:23:26.205 [2024-05-15 08:35:12.962124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:26.205 [2024-05-15 08:35:12.962147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:26.205 [2024-05-15 08:35:12.962160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:26.205 [2024-05-15 08:35:12.962177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:26.205 [2024-05-15 08:35:12.962188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:26.205 [2024-05-15 08:35:12.962200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:26.205 task offset: 25472 on job bdev=Nvme1n1 fails 00:23:26.205 00:23:26.205 Latency(us) 00:23:26.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.205 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:26.205 Job: Nvme1n1 ended in about 0.87 seconds with error 00:23:26.205 Verification LBA range: start 0x0 length 0x400 00:23:26.205 Nvme1n1 : 0.87 221.80 13.86 73.93 0.00 214131.34 2222.53 225215.89 00:23:26.205 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:26.205 Job: Nvme2n1 ended in about 0.89 seconds with error 00:23:26.205 Verification LBA range: start 0x0 length 0x400 00:23:26.205 Nvme2n1 : 0.89 219.93 13.75 71.82 0.00 213174.80 5527.82 221568.67 00:23:26.205 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:26.205 Job: Nvme3n1 ended in about 0.90 seconds with error 00:23:26.205 Verification LBA range: start 0x0 length 0x400 00:23:26.205 Nvme3n1 : 0.90 213.86 13.37 71.29 0.00 214176.06 16640.45 215186.03 00:23:26.205 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:26.205 Job: Nvme4n1 ended in about 0.91 seconds with error 00:23:26.205 Verification LBA range: start 0x0 length 0x400 00:23:26.205 Nvme4n1 : 0.91 211.99 13.25 70.66 0.00 212182.15 28379.94 204244.37 00:23:26.205 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:26.205 Verification LBA range: start 0x0 length 0x400 00:23:26.205 Nvme5n1 : 0.87 221.00 13.81 0.00 0.00 265394.68 18692.01 227039.50 00:23:26.205 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:26.205 Job: Nvme6n1 ended in about 0.91 seconds with error 00:23:26.205 Verification LBA range: start 0x0 length 0x400 00:23:26.205 Nvme6n1 : 0.91 211.37 13.21 70.46 0.00 204986.10 21313.45 221568.67 00:23:26.205 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:26.205 Verification LBA range: start 0x0 length 0x400 
00:23:26.205 Nvme7n1  :       0.87  297.50   18.59    0.00   0.00  188562.63    2008.82  212450.62
00:23:26.205 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.205 Job: Nvme8n1 ended in about 0.88 seconds with error
00:23:26.205 Verification LBA range: start 0x0 length 0x400
00:23:26.205 Nvme8n1  :       0.88  293.44   18.34    4.57   0.00  184460.79    1111.26  196038.12
00:23:26.205 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.205 Job: Nvme9n1 ended in about 0.89 seconds with error
00:23:26.205 Verification LBA range: start 0x0 length 0x400
00:23:26.205 Nvme9n1  :       0.89  259.43   16.21   42.86   0.00  178430.80    8491.19  177802.02
00:23:26.205 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.205 Job: Nvme10n1 ended in about 0.90 seconds with error
00:23:26.205 Verification LBA range: start 0x0 length 0x400
00:23:26.205 Nvme10n1 :       0.90  142.25    8.89   71.13   0.00  249349.05   23365.01  242540.19
00:23:26.205 ===================================================================================================================
00:23:26.205 Total    :            2292.58  143.29  476.71   0.00  209885.23    1111.26  242540.19
00:23:26.205 [2024-05-15 08:35:12.986245] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:26.205 [2024-05-15 08:35:12.986336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ec730 (9): Bad file descriptor
00:23:26.206 [2024-05-15 08:35:12.986351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1819a30 (9): Bad file descriptor
00:23:26.206 [2024-05-15 08:35:12.986360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a3980 (9): Bad file descriptor
00:23:26.206 [2024-05-15 08:35:12.986685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:26.206 [2024-05-15 08:35:12.986981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.206 [2024-05-15 08:35:12.987157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.206 [2024-05-15 08:35:12.987175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f3610 with addr=10.0.0.2, port=4420
00:23:26.206 [2024-05-15 08:35:12.987185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f3610 is same with the state(5) to be set
[... the same connect() failed (errno = 111) / sock connection error / recv state sequence repeats for tqpair 0x19ace30, 0x19b54c0, 0x17ee8a0, 0x1838a00 and 0x13db2f0, all targeting addr=10.0.0.2, port=4420 ...]
00:23:26.206 [2024-05-15 08:35:12.989236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:26.206 [2024-05-15 08:35:12.989242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:26.206 [2024-05-15 08:35:12.989251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
[... the same Ctrlr is in error state / controller reinitialization failed / in failed state sequence repeats for cnode3 and cnode10 ...]
00:23:26.206 [2024-05-15 08:35:12.989321] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:26.206 [2024-05-15 08:35:12.989334] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:26.206 [2024-05-15 08:35:12.989346] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:26.206 [2024-05-15 08:35:12.989858] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:26.206 [2024-05-15 08:35:12.989873] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:26.206 [2024-05-15 08:35:12.989879] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:26.206 [2024-05-15 08:35:12.990025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.206 [2024-05-15 08:35:12.990158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:26.206 [2024-05-15 08:35:12.990183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18394f0 with addr=10.0.0.2, port=4420
00:23:26.206 [2024-05-15 08:35:12.990191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18394f0 is same with the state(5) to be set
00:23:26.206 [2024-05-15 08:35:12.990203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3610 (9): Bad file descriptor
00:23:26.206 [2024-05-15 08:35:12.990213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ace30 (9): Bad file descriptor
00:23:26.206 [2024-05-15 08:35:12.990222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b54c0 (9): Bad file descriptor
00:23:26.206 [2024-05-15 08:35:12.990231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ee8a0 (9): Bad file descriptor
00:23:26.206 [2024-05-15 08:35:12.990239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1838a00 (9): Bad file descriptor
00:23:26.206 [2024-05-15 08:35:12.990248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13db2f0 (9): Bad file descriptor
00:23:26.206 [2024-05-15 08:35:12.990298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18394f0 (9): Bad file descriptor
00:23:26.206 [2024-05-15 08:35:12.990308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:23:26.206 [2024-05-15 08:35:12.990314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:23:26.206 [2024-05-15 08:35:12.990321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
[... the same three-line sequence repeats for cnode9, cnode8, cnode1, cnode5 and cnode4 ...]
00:23:26.206 [2024-05-15 08:35:12.990471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... repeated at .990479, .990484, .990490, .990497 and .990503 for the remaining controllers ...]
00:23:26.207 [2024-05-15 08:35:12.990511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:23:26.207 [2024-05-15 08:35:12.990517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:23:26.207 [2024-05-15 08:35:12.990523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:23:26.207 [2024-05-15 08:35:12.990551] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
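[Editorial note, not part of the run output.] Two details make the failure log above easier to read. ABORTED - SQ DELETION (00/08) is the NVMe generic status "Command Aborted due to SQ Deletion": every READ still queued on qid:1 is force-completed when the submission queue is torn down, which is exactly what killing the target mid-verify produces. The connect() failed, errno = 111 lines are ECONNREFUSED on Linux: the host's reconnect attempts land after the target process is already gone. The latency table can also be sanity-checked against the 65536-byte I/O size each job used, since MiB/s is just IOPS x 64 KiB:

    Nvme1n1: 221.80 IOPS x 65536 B = 221.80 / 16 = 13.86 MiB/s
    Total:   2292.58 IOPS / 16          = 143.29 MiB/s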
00:23:26.465 08:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:23:26.465 08:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 358324
00:23:27.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (358324) - No such process
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:27.402 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:27.402 rmmod nvme_tcp
00:23:27.402 rmmod nvme_fabrics
00:23:27.660 rmmod nvme_keyring
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:27.660 08:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:29.564 08:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:29.564
00:23:29.564 real 0m7.779s
00:23:29.564 user 0m19.152s
00:23:29.564 sys 0m1.258s
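[Editorial note.] Collapsed from the xtrace above, the tc3 teardown reduces to the following shell sketch. The rm/modprobe/flush commands are taken verbatim from the trace; the netns removal line is an assumption, since the trace only shows the _remove_spdk_ns wrapper, not its body:

    # stoptarget + nvmftestfini as traced above -- a sketch, not the verbatim functions
    rm -f ./local-job0-0-verify.state
    rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"   # $testdir = spdk/test/nvmf/target in this run
    sync
    modprobe -v -r nvme-tcp                               # rmmod's nvme_tcp, nvme_fabrics, nvme_keyring per the output
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # drop the initiator-side test address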
08:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:23:29.564 08:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:29.564 ************************************
00:23:29.564 END TEST nvmf_shutdown_tc3
00:23:29.564 ************************************
00:23:29.564 08:35:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:23:29.564
00:23:29.564 real 0m31.260s
00:23:29.564 user 1m19.728s
00:23:29.564 sys 0m8.055s
00:23:29.564 08:35:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable
00:23:29.564 08:35:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:29.564 ************************************
00:23:29.564 END TEST nvmf_shutdown
00:23:29.564 ************************************
00:23:29.823 08:35:16 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target
00:23:29.823 08:35:16 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:29.823 08:35:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:29.823 08:35:16 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host
00:23:29.823 08:35:16 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable
00:23:29.823 08:35:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:29.823 08:35:16 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]]
00:23:29.823 08:35:16 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:23:29.823 08:35:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:23:29.823 08:35:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:23:29.823 08:35:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:29.824 ************************************
00:23:29.824 START TEST nvmf_multicontroller
00:23:29.824 ************************************
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:23:29.824 * Looking for test storage...
00:23:29.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2 through @4 re-export PATH, prepending the /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin toolchain directories (repeated); @5 exports PATH and @6 echoes the resulting value; the full, very long PATH strings are elided here ...]
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:23:29.824 08:35:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']'
08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs
08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no
08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns
08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
08:35:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
08:35:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]]
08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
08:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable
08:35:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:35.094 08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... nvmf/common.sh@291-@318: the pci_devs/pci_net_devs/pci_drivers/net_devs array declarations and the e810/x722/mlx PCI device-ID table (e810: 0x1592, 0x159b; x722: 0x37d2; mlx: 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013); xtrace elided ...]
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
[... the per-device driver checks (@342-@352: ice is neither unknown nor unbound, 0x159b matches neither 0x1017 nor 0x1019, transport is not rdma) and the net-device scan (@366-@399) elided ...]
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 ))
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]]
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 ))
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
08:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:35.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:35.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms
00:23:35.095
00:23:35.095 --- 10.0.0.2 ping statistics ---
00:23:35.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:35.095 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:35.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:35.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:23:35.095
00:23:35.095 --- 10.0.0.1 ping statistics ---
00:23:35.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:35.095 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=362581
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 362581
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 362581 ']'
00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
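[Editorial note.] Condensed from the nvmf_tcp_init trace above: the run wires the two e810 ports together by moving the target port into a private network namespace, so initiator and target can exchange real TCP traffic on one machine. A sketch of the equivalent commands, all of which appear verbatim in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt process is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), so its listeners bind to 10.0.0.2 while the host-side tools connect from the default namespace.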
common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.095 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.096 [2024-05-15 08:35:22.114426] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:23:35.096 [2024-05-15 08:35:22.114470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.355 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.355 [2024-05-15 08:35:22.168614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:35.355 [2024-05-15 08:35:22.247052] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.355 [2024-05-15 08:35:22.247086] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.355 [2024-05-15 08:35:22.247094] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.355 [2024-05-15 08:35:22.247100] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.355 [2024-05-15 08:35:22.247106] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.355 [2024-05-15 08:35:22.247278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.355 [2024-05-15 08:35:22.247299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.355 [2024-05-15 08:35:22.247299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.923 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.923 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:23:35.923 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.923 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.923 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 08:35:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.183 08:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.183 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 [2024-05-15 08:35:22.972200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.183 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:36.183 08:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:22 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 Malloc0 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 [2024-05-15 08:35:23.030682] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:36.183 [2024-05-15 08:35:23.030895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 [2024-05-15 08:35:23.038815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 Malloc1 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:23 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=362631 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 362631 /var/tmp/bdevperf.sock 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 362631 ']' 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
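At this point the target holds two subsystems (cnode1 and cnode2), each backed by a 64 MB malloc bdev and listening on both 4420 and 4421, and bdevperf has just been started idle (-z) with its own RPC socket. The rpc_cmd exchanges that follow drive that socket to check the bdev_nvme attach rules: reusing the controller name NVMe0 is rejected with -114 whether the hostnqn, the target subsystem, or the multipath mode (disable/failover) changes, while a genuinely new path (port 4421) or a new controller name (NVMe1) is accepted. A hedged sketch of the accepted calls, condensed from this trace (rpc.py stands in for the script's rpc_cmd wrapper):

# Multipath checks driven over bdevperf's RPC socket (flags as in this run).
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
RPC="rpc.py -s /var/tmp/bdevperf.sock"
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000   # first path: NVMe0n1 appears
# ... the four NOT-wrapped re-attach attempts below each return -114 ...
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1                        # new listener port: accepted
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000   # second controller, same ns
$RPC bdev_nvme_get_controllers | grep -c NVMe            # expect 2
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The try.txt excerpt further down shows the side effect of that last attach: NVMe1 reaches the same namespace UUID already claimed through NVMe0, so the alias registration fails ("Bdev name ... already exists", spdk_bdev_register() failed), yet the one-second I/O run still completes at roughly 24.4k IOPS on NVMe0n1.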
00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:36.183 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.119 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:37.119 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:23:37.119 08:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:37.119 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.119 08:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.119 NVMe0n1 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.119 1 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.119 request: 00:23:37.119 { 00:23:37.119 "name": "NVMe0", 00:23:37.119 "trtype": "tcp", 00:23:37.119 "traddr": "10.0.0.2", 00:23:37.119 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:37.119 "hostaddr": "10.0.0.2", 00:23:37.119 "hostsvcid": "60000", 00:23:37.119 "adrfam": "ipv4", 00:23:37.119 "trsvcid": "4420", 00:23:37.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.119 "method": 
"bdev_nvme_attach_controller", 00:23:37.119 "req_id": 1 00:23:37.119 } 00:23:37.119 Got JSON-RPC error response 00:23:37.119 response: 00:23:37.119 { 00:23:37.119 "code": -114, 00:23:37.119 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:37.119 } 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:37.119 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.120 request: 00:23:37.120 { 00:23:37.120 "name": "NVMe0", 00:23:37.120 "trtype": "tcp", 00:23:37.120 "traddr": "10.0.0.2", 00:23:37.120 "hostaddr": "10.0.0.2", 00:23:37.120 "hostsvcid": "60000", 00:23:37.120 "adrfam": "ipv4", 00:23:37.120 "trsvcid": "4420", 00:23:37.120 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:37.120 "method": "bdev_nvme_attach_controller", 00:23:37.120 "req_id": 1 00:23:37.120 } 00:23:37.120 Got JSON-RPC error response 00:23:37.120 response: 00:23:37.120 { 00:23:37.120 "code": -114, 00:23:37.120 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:37.120 } 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.120 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.379 request: 00:23:37.379 { 00:23:37.379 "name": "NVMe0", 00:23:37.379 "trtype": "tcp", 00:23:37.379 "traddr": "10.0.0.2", 00:23:37.379 "hostaddr": "10.0.0.2", 00:23:37.379 "hostsvcid": "60000", 00:23:37.379 "adrfam": "ipv4", 00:23:37.379 "trsvcid": "4420", 00:23:37.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.379 "multipath": "disable", 00:23:37.379 "method": "bdev_nvme_attach_controller", 00:23:37.379 "req_id": 1 00:23:37.379 } 00:23:37.379 Got JSON-RPC error response 00:23:37.379 response: 00:23:37.379 { 00:23:37.379 "code": -114, 00:23:37.379 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:37.379 } 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.379 request: 00:23:37.379 { 00:23:37.379 "name": "NVMe0", 00:23:37.379 "trtype": "tcp", 00:23:37.379 "traddr": "10.0.0.2", 00:23:37.379 "hostaddr": "10.0.0.2", 00:23:37.379 "hostsvcid": "60000", 00:23:37.379 "adrfam": "ipv4", 00:23:37.379 "trsvcid": "4420", 00:23:37.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.379 "multipath": "failover", 00:23:37.379 "method": "bdev_nvme_attach_controller", 00:23:37.379 "req_id": 1 00:23:37.379 } 00:23:37.379 Got JSON-RPC error response 00:23:37.379 response: 00:23:37.379 { 00:23:37.379 "code": -114, 00:23:37.379 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:37.379 } 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.379 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.379 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.638 00:23:37.638 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.638 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:37.638 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:37.638 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.638 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.638 08:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.638 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:37.638 08:35:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:39.018 0 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 362631 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 362631 ']' 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 362631 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 362631 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 362631' 00:23:39.018 killing process with pid 362631 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 362631 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 362631 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:39.018 08:35:25 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:23:39.018 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:39.018 [2024-05-15 08:35:23.140861] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:23:39.018 [2024-05-15 08:35:23.140907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362631 ] 00:23:39.018 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.018 [2024-05-15 08:35:23.194921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.018 [2024-05-15 08:35:23.270291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.018 [2024-05-15 08:35:24.519261] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 5a058034-3e47-4896-a4f1-6db23914155b already exists 00:23:39.018 [2024-05-15 08:35:24.519287] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:5a058034-3e47-4896-a4f1-6db23914155b alias for bdev NVMe1n1 00:23:39.018 [2024-05-15 08:35:24.519296] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:39.018 Running I/O for 1 seconds... 
00:23:39.018 00:23:39.018 Latency(us) 00:23:39.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.018 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:39.018 NVMe0n1 : 1.00 24399.59 95.31 0.00 0.00 5239.34 1552.92 9175.04 00:23:39.018 =================================================================================================================== 00:23:39.018 Total : 24399.59 95.31 0.00 0.00 5239.34 1552.92 9175.04 00:23:39.018 Received shutdown signal, test time was about 1.000000 seconds 00:23:39.018 00:23:39.018 Latency(us) 00:23:39.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.018 =================================================================================================================== 00:23:39.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.018 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:39.018 08:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:39.018 rmmod nvme_tcp 00:23:39.018 rmmod nvme_fabrics 00:23:39.018 rmmod nvme_keyring 00:23:39.018 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:39.018 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:39.018 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:39.018 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 362581 ']' 00:23:39.018 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 362581 00:23:39.018 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 362581 ']' 00:23:39.018 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 362581 00:23:39.018 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:23:39.018 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:39.018 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 362581 00:23:39.278 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:39.278 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:39.278 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 362581' 00:23:39.278 killing process with pid 362581 00:23:39.278 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 362581 00:23:39.278 [2024-05-15 08:35:26.078042] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:39.278 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 362581 00:23:39.537 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:39.537 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.537 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.537 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.537 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.537 08:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.537 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.537 08:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.445 08:35:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:41.445 00:23:41.445 real 0m11.724s 00:23:41.445 user 0m16.780s 00:23:41.445 sys 0m4.710s 00:23:41.445 08:35:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:41.445 08:35:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.445 ************************************ 00:23:41.445 END TEST nvmf_multicontroller 00:23:41.445 ************************************ 00:23:41.445 08:35:28 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:41.445 08:35:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:41.445 08:35:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:41.445 08:35:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:41.705 ************************************ 00:23:41.705 START TEST nvmf_aer 00:23:41.705 ************************************ 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:41.705 * Looking for test storage... 
00:23:41.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:41.705 08:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
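From here aer.sh repeats the nvmftestinit dance already seen above (NIC discovery, namespace plumbing, address assignment, ping checks) and then runs a short AER scenario: a subsystem capped at two namespaces is created with one namespace attached, the test/nvme/aer tool connects and arms its async-event callback, and adding a second namespace while it waits is what should trigger the "aer_cb for log page 4" / "Changed Namespace" lines further down. A hedged sketch of that body, condensed from the rpc_cmd and aer invocations in this trace:

# AER exercise from host/aer.sh (arguments as they appear in this run).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# aer connects, registers the AER callback, then touches a file once armed:
./test/nvme/aer/aer \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done    # waitforfile
# A second namespace appearing under the live controller fires the AEN:
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2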
pci_net_devs=() 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:46.984 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:23:46.984 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:46.984 Found net devices under 0000:86:00.0: cvl_0_0 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:46.984 Found net devices under 0000:86:00.1: cvl_0_1 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.984 
08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.984 08:35:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.244 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.244 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.244 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:47.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:23:47.244 00:23:47.244 --- 10.0.0.2 ping statistics --- 00:23:47.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.244 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:47.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:23:47.245 00:23:47.245 --- 10.0.0.1 ping statistics --- 00:23:47.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.245 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=366605 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 366605 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 366605 ']' 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:47.245 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.245 [2024-05-15 08:35:34.154918] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:23:47.245 [2024-05-15 08:35:34.154958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.245 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.245 [2024-05-15 08:35:34.213851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:47.504 [2024-05-15 08:35:34.292175] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.504 [2024-05-15 08:35:34.292207] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:47.504 [2024-05-15 08:35:34.292216] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.504 [2024-05-15 08:35:34.292222] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.504 [2024-05-15 08:35:34.292227] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.504 [2024-05-15 08:35:34.292288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.504 [2024-05-15 08:35:34.292365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.505 [2024-05-15 08:35:34.292465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.505 [2024-05-15 08:35:34.292466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.074 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:48.074 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:23:48.074 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.074 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.074 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.074 08:35:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.074 08:35:34 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:48.074 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.074 08:35:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.074 [2024-05-15 08:35:35.004984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.074 Malloc0 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.074 [2024-05-15 08:35:35.059461] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:48.074 [2024-05-15 08:35:35.059693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.074 [ 00:23:48.074 { 00:23:48.074 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:48.074 "subtype": "Discovery", 00:23:48.074 "listen_addresses": [], 00:23:48.074 "allow_any_host": true, 00:23:48.074 "hosts": [] 00:23:48.074 }, 00:23:48.074 { 00:23:48.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.074 "subtype": "NVMe", 00:23:48.074 "listen_addresses": [ 00:23:48.074 { 00:23:48.074 "trtype": "TCP", 00:23:48.074 "adrfam": "IPv4", 00:23:48.074 "traddr": "10.0.0.2", 00:23:48.074 "trsvcid": "4420" 00:23:48.074 } 00:23:48.074 ], 00:23:48.074 "allow_any_host": true, 00:23:48.074 "hosts": [], 00:23:48.074 "serial_number": "SPDK00000000000001", 00:23:48.074 "model_number": "SPDK bdev Controller", 00:23:48.074 "max_namespaces": 2, 00:23:48.074 "min_cntlid": 1, 00:23:48.074 "max_cntlid": 65519, 00:23:48.074 "namespaces": [ 00:23:48.074 { 00:23:48.074 "nsid": 1, 00:23:48.074 "bdev_name": "Malloc0", 00:23:48.074 "name": "Malloc0", 00:23:48.074 "nguid": "FD020BB39B8345A38E5E7C99531FA3FD", 00:23:48.074 "uuid": "fd020bb3-9b83-45a3-8e5e-7c99531fa3fd" 00:23:48.074 } 00:23:48.074 ] 00:23:48.074 } 00:23:48.074 ] 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=366854 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:23:48.074 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:48.335 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.335 Malloc1 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.335 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.335 Asynchronous Event Request test 00:23:48.335 Attaching to 10.0.0.2 00:23:48.335 Attached to 10.0.0.2 00:23:48.335 Registering asynchronous event callbacks... 00:23:48.335 Starting namespace attribute notice tests for all controllers... 00:23:48.335 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:48.335 aer_cb - Changed Namespace 00:23:48.335 Cleaning up... 00:23:48.335 [ 00:23:48.335 { 00:23:48.335 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:48.335 "subtype": "Discovery", 00:23:48.335 "listen_addresses": [], 00:23:48.335 "allow_any_host": true, 00:23:48.335 "hosts": [] 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.335 "subtype": "NVMe", 00:23:48.335 "listen_addresses": [ 00:23:48.335 { 00:23:48.335 "trtype": "TCP", 00:23:48.335 "adrfam": "IPv4", 00:23:48.335 "traddr": "10.0.0.2", 00:23:48.335 "trsvcid": "4420" 00:23:48.335 } 00:23:48.335 ], 00:23:48.335 "allow_any_host": true, 00:23:48.335 "hosts": [], 00:23:48.335 "serial_number": "SPDK00000000000001", 00:23:48.335 "model_number": "SPDK bdev Controller", 00:23:48.335 "max_namespaces": 2, 00:23:48.335 "min_cntlid": 1, 00:23:48.335 "max_cntlid": 65519, 00:23:48.335 "namespaces": [ 00:23:48.335 { 00:23:48.335 "nsid": 1, 00:23:48.335 "bdev_name": "Malloc0", 00:23:48.335 "name": "Malloc0", 00:23:48.335 "nguid": "FD020BB39B8345A38E5E7C99531FA3FD", 00:23:48.335 "uuid": "fd020bb3-9b83-45a3-8e5e-7c99531fa3fd" 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "nsid": 2, 00:23:48.335 "bdev_name": "Malloc1", 00:23:48.595 "name": "Malloc1", 00:23:48.595 "nguid": "A8378D1524D74B0DBE6C21DDA3F52705", 00:23:48.595 "uuid": "a8378d15-24d7-4b0d-be6c-21dda3f52705" 00:23:48.595 } 00:23:48.595 ] 00:23:48.595 } 00:23:48.595 ] 00:23:48.595 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.595 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 366854 00:23:48.595 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:48.595 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.595 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.595 08:35:35 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.595 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:48.595 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.595 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.596 rmmod nvme_tcp 00:23:48.596 rmmod nvme_fabrics 00:23:48.596 rmmod nvme_keyring 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 366605 ']' 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 366605 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 366605 ']' 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 366605 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 366605 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 366605' 00:23:48.596 killing process with pid 366605 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 366605 00:23:48.596 [2024-05-15 08:35:35.523757] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:48.596 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 366605 00:23:48.855 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.855 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.855 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.855 08:35:35 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.855 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.855 08:35:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.855 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.855 08:35:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.395 08:35:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.395 00:23:51.395 real 0m9.332s 00:23:51.395 user 0m7.188s 00:23:51.395 sys 0m4.637s 00:23:51.395 08:35:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:51.395 08:35:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:51.395 ************************************ 00:23:51.395 END TEST nvmf_aer 00:23:51.395 ************************************ 00:23:51.395 08:35:37 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:51.395 08:35:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:51.395 08:35:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:51.395 08:35:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:51.395 ************************************ 00:23:51.395 START TEST nvmf_async_init 00:23:51.395 ************************************ 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:51.395 * Looking for test storage... 00:23:51.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
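
Note how nvmf/common.sh (lines 17 through 19 in the trace above) builds the host identity that later connects reuse: the host NQN comes fresh from nvme-cli, and the host ID is its UUID suffix. A minimal sketch of that step follows; the exact suffix-stripping shown here is an assumption, and common.sh may derive NVME_HOSTID differently:

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # keep only the UUID part
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'           # later expanded as: $NVME_CONNECT "${NVME_HOST[@]}" ...
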
00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.395 
08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=cfd882848cf245f784c92598cfabefcb 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.395 08:35:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.395 08:35:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:51.395 08:35:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.395 08:35:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.395 08:35:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.672 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.672 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.672 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.672 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.672 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.672 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.672 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.672 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.672 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:56.673 08:35:43 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:56.673 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:56.673 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.673 08:35:43 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:56.673 Found net devices under 0000:86:00.0: cvl_0_0 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:56.673 Found net devices under 0000:86:00.1: cvl_0_1 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.673 08:35:43 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:23:56.673 00:23:56.673 --- 10.0.0.2 ping statistics --- 00:23:56.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.673 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:56.673 00:23:56.673 --- 10.0.0.1 ping statistics --- 00:23:56.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.673 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=370368 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 370368 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 370368 ']' 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.673 08:35:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:56.673 [2024-05-15 08:35:43.334146] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:23:56.673 [2024-05-15 08:35:43.334195] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.673 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.673 [2024-05-15 08:35:43.390354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.673 [2024-05-15 08:35:43.468982] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.673 [2024-05-15 08:35:43.469013] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
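
Both pings succeed because nvmf_tcp_init (traced above) has turned the dual-port E810 card into a two-host topology on one machine: port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side, while cvl_0_1 stays in the root namespace as the initiator. A sketch of that plumbing, with the commands taken from the nvmf/common.sh trace above (the nvmf_tgt path is shortened here, and back-to-back cabling of the two ports is an assumption of this phy test bed):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                             # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1   # target runs inside the namespace

This is why the nvmf_tgt started next (nvmfpid 370368 above) listens on 10.0.0.2 while every initiator-side command dials out from 10.0.0.1.
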
00:23:56.673 [2024-05-15 08:35:43.469020] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.673 [2024-05-15 08:35:43.469026] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.673 [2024-05-15 08:35:43.469031] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.673 [2024-05-15 08:35:43.469065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.241 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:57.241 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:23:57.241 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.241 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.241 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.241 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.242 [2024-05-15 08:35:44.151856] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.242 null0 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g cfd882848cf245f784c92598cfabefcb 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:57.242 
08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.242 [2024-05-15 08:35:44.191913] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:57.242 [2024-05-15 08:35:44.192107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.242 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.501 nvme0n1 00:23:57.501 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.501 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:57.501 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.501 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.501 [ 00:23:57.501 { 00:23:57.501 "name": "nvme0n1", 00:23:57.501 "aliases": [ 00:23:57.501 "cfd88284-8cf2-45f7-84c9-2598cfabefcb" 00:23:57.501 ], 00:23:57.501 "product_name": "NVMe disk", 00:23:57.501 "block_size": 512, 00:23:57.501 "num_blocks": 2097152, 00:23:57.501 "uuid": "cfd88284-8cf2-45f7-84c9-2598cfabefcb", 00:23:57.501 "assigned_rate_limits": { 00:23:57.501 "rw_ios_per_sec": 0, 00:23:57.501 "rw_mbytes_per_sec": 0, 00:23:57.501 "r_mbytes_per_sec": 0, 00:23:57.501 "w_mbytes_per_sec": 0 00:23:57.501 }, 00:23:57.501 "claimed": false, 00:23:57.501 "zoned": false, 00:23:57.501 "supported_io_types": { 00:23:57.501 "read": true, 00:23:57.501 "write": true, 00:23:57.501 "unmap": false, 00:23:57.501 "write_zeroes": true, 00:23:57.501 "flush": true, 00:23:57.501 "reset": true, 00:23:57.501 "compare": true, 00:23:57.501 "compare_and_write": true, 00:23:57.501 "abort": true, 00:23:57.501 "nvme_admin": true, 00:23:57.501 "nvme_io": true 00:23:57.501 }, 00:23:57.501 "memory_domains": [ 00:23:57.501 { 00:23:57.501 "dma_device_id": "system", 00:23:57.501 "dma_device_type": 1 00:23:57.501 } 00:23:57.501 ], 00:23:57.501 "driver_specific": { 00:23:57.501 "nvme": [ 00:23:57.501 { 00:23:57.501 "trid": { 00:23:57.501 "trtype": "TCP", 00:23:57.501 "adrfam": "IPv4", 00:23:57.501 "traddr": "10.0.0.2", 00:23:57.501 "trsvcid": "4420", 00:23:57.501 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:57.501 }, 00:23:57.501 "ctrlr_data": { 00:23:57.501 "cntlid": 1, 00:23:57.501 "vendor_id": "0x8086", 00:23:57.501 "model_number": "SPDK bdev Controller", 00:23:57.501 "serial_number": "00000000000000000000", 00:23:57.501 "firmware_revision": "24.05", 00:23:57.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:57.501 "oacs": { 00:23:57.501 "security": 0, 00:23:57.501 "format": 0, 00:23:57.501 "firmware": 0, 00:23:57.501 "ns_manage": 0 00:23:57.501 }, 00:23:57.501 "multi_ctrlr": true, 00:23:57.501 "ana_reporting": false 00:23:57.501 }, 00:23:57.501 "vs": { 00:23:57.501 "nvme_version": "1.3" 00:23:57.501 }, 00:23:57.501 "ns_data": { 00:23:57.501 "id": 1, 00:23:57.501 "can_share": true 00:23:57.501 } 
00:23:57.501 } 00:23:57.501 ], 00:23:57.501 "mp_policy": "active_passive" 00:23:57.501 } 00:23:57.501 } 00:23:57.501 ] 00:23:57.501 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.501 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:57.501 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.501 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.501 [2024-05-15 08:35:44.440592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:57.501 [2024-05-15 08:35:44.440646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb561a0 (9): Bad file descriptor 00:23:57.759 [2024-05-15 08:35:44.572242] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.759 [ 00:23:57.759 { 00:23:57.759 "name": "nvme0n1", 00:23:57.759 "aliases": [ 00:23:57.759 "cfd88284-8cf2-45f7-84c9-2598cfabefcb" 00:23:57.759 ], 00:23:57.759 "product_name": "NVMe disk", 00:23:57.759 "block_size": 512, 00:23:57.759 "num_blocks": 2097152, 00:23:57.759 "uuid": "cfd88284-8cf2-45f7-84c9-2598cfabefcb", 00:23:57.759 "assigned_rate_limits": { 00:23:57.759 "rw_ios_per_sec": 0, 00:23:57.759 "rw_mbytes_per_sec": 0, 00:23:57.759 "r_mbytes_per_sec": 0, 00:23:57.759 "w_mbytes_per_sec": 0 00:23:57.759 }, 00:23:57.759 "claimed": false, 00:23:57.759 "zoned": false, 00:23:57.759 "supported_io_types": { 00:23:57.759 "read": true, 00:23:57.759 "write": true, 00:23:57.759 "unmap": false, 00:23:57.759 "write_zeroes": true, 00:23:57.759 "flush": true, 00:23:57.759 "reset": true, 00:23:57.759 "compare": true, 00:23:57.759 "compare_and_write": true, 00:23:57.759 "abort": true, 00:23:57.759 "nvme_admin": true, 00:23:57.759 "nvme_io": true 00:23:57.759 }, 00:23:57.759 "memory_domains": [ 00:23:57.759 { 00:23:57.759 "dma_device_id": "system", 00:23:57.759 "dma_device_type": 1 00:23:57.759 } 00:23:57.759 ], 00:23:57.759 "driver_specific": { 00:23:57.759 "nvme": [ 00:23:57.759 { 00:23:57.759 "trid": { 00:23:57.759 "trtype": "TCP", 00:23:57.759 "adrfam": "IPv4", 00:23:57.759 "traddr": "10.0.0.2", 00:23:57.759 "trsvcid": "4420", 00:23:57.759 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:57.759 }, 00:23:57.759 "ctrlr_data": { 00:23:57.759 "cntlid": 2, 00:23:57.759 "vendor_id": "0x8086", 00:23:57.759 "model_number": "SPDK bdev Controller", 00:23:57.759 "serial_number": "00000000000000000000", 00:23:57.759 "firmware_revision": "24.05", 00:23:57.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:57.759 "oacs": { 00:23:57.759 "security": 0, 00:23:57.759 "format": 0, 00:23:57.759 "firmware": 0, 00:23:57.759 "ns_manage": 0 00:23:57.759 }, 00:23:57.759 "multi_ctrlr": true, 00:23:57.759 "ana_reporting": false 00:23:57.759 }, 00:23:57.759 "vs": { 00:23:57.759 "nvme_version": "1.3" 00:23:57.759 }, 00:23:57.759 "ns_data": { 00:23:57.759 "id": 1, 00:23:57.759 "can_share": true 00:23:57.759 } 00:23:57.759 } 00:23:57.759 ], 00:23:57.759 "mp_policy": "active_passive" 
00:23:57.759 } 00:23:57.759 } 00:23:57.759 ] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.tWgq35xU9D 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.tWgq35xU9D 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.759 [2024-05-15 08:35:44.621137] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.759 [2024-05-15 08:35:44.621246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tWgq35xU9D 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.759 [2024-05-15 08:35:44.629149] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tWgq35xU9D 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.759 [2024-05-15 08:35:44.637176] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.759 [2024-05-15 08:35:44.637210] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:23:57.759 nvme0n1 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.759 [ 00:23:57.759 { 00:23:57.759 "name": "nvme0n1", 00:23:57.759 "aliases": [ 00:23:57.759 "cfd88284-8cf2-45f7-84c9-2598cfabefcb" 00:23:57.759 ], 00:23:57.759 "product_name": "NVMe disk", 00:23:57.759 "block_size": 512, 00:23:57.759 "num_blocks": 2097152, 00:23:57.759 "uuid": "cfd88284-8cf2-45f7-84c9-2598cfabefcb", 00:23:57.759 "assigned_rate_limits": { 00:23:57.759 "rw_ios_per_sec": 0, 00:23:57.759 "rw_mbytes_per_sec": 0, 00:23:57.759 "r_mbytes_per_sec": 0, 00:23:57.759 "w_mbytes_per_sec": 0 00:23:57.759 }, 00:23:57.759 "claimed": false, 00:23:57.759 "zoned": false, 00:23:57.759 "supported_io_types": { 00:23:57.759 "read": true, 00:23:57.759 "write": true, 00:23:57.759 "unmap": false, 00:23:57.759 "write_zeroes": true, 00:23:57.759 "flush": true, 00:23:57.759 "reset": true, 00:23:57.759 "compare": true, 00:23:57.759 "compare_and_write": true, 00:23:57.759 "abort": true, 00:23:57.759 "nvme_admin": true, 00:23:57.759 "nvme_io": true 00:23:57.759 }, 00:23:57.759 "memory_domains": [ 00:23:57.759 { 00:23:57.759 "dma_device_id": "system", 00:23:57.759 "dma_device_type": 1 00:23:57.759 } 00:23:57.759 ], 00:23:57.759 "driver_specific": { 00:23:57.759 "nvme": [ 00:23:57.759 { 00:23:57.759 "trid": { 00:23:57.759 "trtype": "TCP", 00:23:57.759 "adrfam": "IPv4", 00:23:57.759 "traddr": "10.0.0.2", 00:23:57.759 "trsvcid": "4421", 00:23:57.759 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:57.759 }, 00:23:57.759 "ctrlr_data": { 00:23:57.759 "cntlid": 3, 00:23:57.759 "vendor_id": "0x8086", 00:23:57.759 "model_number": "SPDK bdev Controller", 00:23:57.759 "serial_number": "00000000000000000000", 00:23:57.759 "firmware_revision": "24.05", 00:23:57.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:57.759 "oacs": { 00:23:57.759 "security": 0, 00:23:57.759 "format": 0, 00:23:57.759 "firmware": 0, 00:23:57.759 "ns_manage": 0 00:23:57.759 }, 00:23:57.759 "multi_ctrlr": true, 00:23:57.759 "ana_reporting": false 00:23:57.759 }, 00:23:57.759 "vs": { 00:23:57.759 "nvme_version": "1.3" 00:23:57.759 }, 00:23:57.759 "ns_data": { 00:23:57.759 "id": 1, 00:23:57.759 "can_share": true 00:23:57.759 } 00:23:57.759 } 00:23:57.759 ], 00:23:57.759 "mp_policy": "active_passive" 00:23:57.759 } 00:23:57.759 } 00:23:57.759 ] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.tWgq35xU9D 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini
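
The cntlid 3 controller in the dump above came in over the secure channel. For reference, the TLS leg of host/async_init.sh condenses to the sketch below; every value is copied from the trace (the key path came from mktemp), and scripts/rpc.py is assumed as the transport behind the rpc_cmd calls. Note the three deprecation warnings that follow in the teardown: the PSK-path mechanism used here is scheduled for removal in v24.09.

  # Interchange-format NVMe TLS PSK, written to a mode-0600 file.
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/tmp.tWgq35xU9D
  chmod 0600 /tmp/tmp.tWgq35xU9D
  # Target side: require explicit hosts, open a TLS listener on 4421, admit host1 with the PSK.
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tWgq35xU9D
  # Initiator side: attach through the secure channel with the same PSK and host NQN.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tWgq35xU9D
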
00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.759 rmmod nvme_tcp 00:23:57.759 rmmod nvme_fabrics 00:23:57.759 rmmod nvme_keyring 00:23:57.759 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 370368 ']' 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 370368 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 370368 ']' 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 370368 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 370368 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 370368' killing process with pid 370368 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 370368 00:23:58.018 [2024-05-15 08:35:44.829560] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:58.018 [2024-05-15 08:35:44.829582] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:58.018 [2024-05-15 08:35:44.829589] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:58.018 08:35:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 370368 00:23:58.018 08:35:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:58.018 08:35:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:58.018 08:35:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:58.018 08:35:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.018 08:35:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:58.018 08:35:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.018 08:35:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.018 08:35:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.555 08:35:47 nvmf_tcp.nvmf_async_init --
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:00.555 00:24:00.555 real 0m9.191s 00:24:00.555 user 0m3.297s 00:24:00.555 sys 0m4.324s 00:24:00.555 08:35:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:00.555 08:35:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:00.555 ************************************ 00:24:00.555 END TEST nvmf_async_init 00:24:00.555 ************************************ 00:24:00.555 08:35:47 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:00.555 08:35:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:00.555 08:35:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:00.555 08:35:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:00.555 ************************************ 00:24:00.555 START TEST dma 00:24:00.555 ************************************ 00:24:00.555 08:35:47 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:00.555 * Looking for test storage... 00:24:00.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.555 08:35:47 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.555 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.555 08:35:47 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.555 08:35:47 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.555 08:35:47 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.555 08:35:47 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.555 08:35:47 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.556 08:35:47 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.556 08:35:47 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:00.556 08:35:47 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.556 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:00.556 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.556 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.556 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.556 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.556 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.556 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.556 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.556 08:35:47 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.556 08:35:47 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:00.556 08:35:47 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:00.556 00:24:00.556 real 0m0.113s 00:24:00.556 user 0m0.052s 00:24:00.556 sys 0m0.069s 00:24:00.556 08:35:47 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:00.556 08:35:47 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:00.556 ************************************ 
00:24:00.556 END TEST dma 00:24:00.556 ************************************ 00:24:00.556 08:35:47 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:00.556 08:35:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:00.556 08:35:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:00.556 08:35:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:00.556 ************************************ 00:24:00.556 START TEST nvmf_identify 00:24:00.556 ************************************ 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:00.556 * Looking for test storage... 00:24:00.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:00.556 08:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:05.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:05.830 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:05.830 Found net devices under 0000:86:00.0: cvl_0_0 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
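An aside on what the xtrace above is doing: gather_supported_nvmf_pci_devs matches each PCI function against known Intel/Mellanox device IDs (0x8086 - 0x159b is the E810 "ice" part, found twice in this run) and then resolves each match to its kernel net device through sysfs. A minimal sketch of that sysfs lookup, reusing the bus address and array steps from the trace (only the final echo wording is mine):

    pci=0000:86:00.0                                   # first E810 port matched above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs entries are named after the netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, leaving e.g. cvl_0_0
    echo "net devices under $pci: ${pci_net_devs[*]}"

The second loop iteration below repeats the same lookup for 0000:86:00.1, yielding cvl_0_1.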
00:24:05.830 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:05.831 Found net devices under 0000:86:00.1: cvl_0_1 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.831 08:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:05.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:05.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:24:05.831 00:24:05.831 --- 10.0.0.2 ping statistics --- 00:24:05.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.831 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:24:05.831 00:24:05.831 --- 10.0.0.1 ping statistics --- 00:24:05.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.831 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=373958 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 373958 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 373958 ']' 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.831 08:35:52 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:05.831 [2024-05-15 08:35:52.207966] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
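Condensing the nvmf_tcp_init sequence traced above into the commands it ran: the first E810 port is moved into a private network namespace to act as the target side, the second stays in the host namespace as the initiator, and a firewall rule plus two pings validate the 10.0.0.0/24 link before the target application is launched. A sketch using exactly the names from this run (nothing here is beyond what the trace shows):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator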
00:24:05.831 [2024-05-15 08:35:52.208010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.831 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.831 [2024-05-15 08:35:52.267951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.831 [2024-05-15 08:35:52.348329] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.831 [2024-05-15 08:35:52.348363] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.831 [2024-05-15 08:35:52.348370] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.831 [2024-05-15 08:35:52.348376] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.831 [2024-05-15 08:35:52.348381] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.831 [2024-05-15 08:35:52.348449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.831 [2024-05-15 08:35:52.348467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.831 [2024-05-15 08:35:52.348564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.831 [2024-05-15 08:35:52.348565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.090 [2024-05-15 08:35:53.020923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.090 Malloc0 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.090 [2024-05-15 08:35:53.108665] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:06.090 [2024-05-15 08:35:53.108881] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.090 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.352 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:06.352 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.352 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.352 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.352 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:06.352 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.352 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.352 [ 00:24:06.352 { 00:24:06.352 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:06.352 "subtype": "Discovery", 00:24:06.352 "listen_addresses": [ 00:24:06.352 { 00:24:06.352 "trtype": "TCP", 00:24:06.352 "adrfam": "IPv4", 00:24:06.352 "traddr": "10.0.0.2", 00:24:06.352 "trsvcid": "4420" 00:24:06.352 } 00:24:06.352 ], 00:24:06.352 "allow_any_host": true, 00:24:06.352 "hosts": [] 00:24:06.352 }, 00:24:06.352 { 00:24:06.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.352 "subtype": "NVMe", 00:24:06.352 "listen_addresses": [ 00:24:06.352 { 00:24:06.352 "trtype": "TCP", 00:24:06.352 "adrfam": "IPv4", 00:24:06.352 "traddr": "10.0.0.2", 00:24:06.352 "trsvcid": "4420" 00:24:06.352 } 00:24:06.352 ], 00:24:06.352 "allow_any_host": true, 00:24:06.352 "hosts": [], 00:24:06.352 "serial_number": "SPDK00000000000001", 00:24:06.352 "model_number": "SPDK bdev Controller", 00:24:06.352 "max_namespaces": 32, 00:24:06.352 "min_cntlid": 1, 00:24:06.352 "max_cntlid": 65519, 00:24:06.352 "namespaces": [ 00:24:06.352 { 00:24:06.352 "nsid": 1, 00:24:06.352 "bdev_name": "Malloc0", 00:24:06.352 "name": "Malloc0", 00:24:06.352 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:06.352 "eui64": "ABCDEF0123456789", 00:24:06.352 "uuid": "abee4f73-04b3-4305-9721-bf971d1345ad" 00:24:06.352 } 00:24:06.352 ] 00:24:06.352 } 00:24:06.352 ] 00:24:06.352 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.352 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:06.352 [2024-05-15 
08:35:53.159627] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:24:06.352 [2024-05-15 08:35:53.159681] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374203 ] 00:24:06.352 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.352 [2024-05-15 08:35:53.188707] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:06.352 [2024-05-15 08:35:53.188752] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:06.352 [2024-05-15 08:35:53.188757] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:06.352 [2024-05-15 08:35:53.188768] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:06.352 [2024-05-15 08:35:53.188775] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:06.352 [2024-05-15 08:35:53.189082] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:06.352 [2024-05-15 08:35:53.189108] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xafcc30 0 00:24:06.352 [2024-05-15 08:35:53.203171] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:06.352 [2024-05-15 08:35:53.203181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:06.352 [2024-05-15 08:35:53.203188] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:06.352 [2024-05-15 08:35:53.203191] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:06.352 [2024-05-15 08:35:53.203225] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.352 [2024-05-15 08:35:53.203230] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.352 [2024-05-15 08:35:53.203234] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafcc30) 00:24:06.352 [2024-05-15 08:35:53.203246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:06.352 [2024-05-15 08:35:53.203262] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64980, cid 0, qid 0 00:24:06.352 [2024-05-15 08:35:53.211175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.352 [2024-05-15 08:35:53.211182] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.352 [2024-05-15 08:35:53.211186] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.352 [2024-05-15 08:35:53.211190] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64980) on tqpair=0xafcc30 00:24:06.352 [2024-05-15 08:35:53.211201] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:06.352 [2024-05-15 08:35:53.211206] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:06.352 [2024-05-15 08:35:53.211211] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:06.352 [2024-05-15 08:35:53.211222] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
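For reference while reading the connect trace that follows: the rpc_cmd calls earlier in this test built the target state the initiator is now probing. As far as I can tell rpc_cmd is a thin wrapper over scripts/rpc.py (that wrapper detail is my reading; the subcommands and arguments below are verbatim from the trace), so the equivalent direct provisioning would be roughly:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420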
00:24:06.352 [2024-05-15 08:35:53.211225] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.352 [2024-05-15 08:35:53.211228] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafcc30) 00:24:06.352 [2024-05-15 08:35:53.211235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.352 [2024-05-15 08:35:53.211247] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64980, cid 0, qid 0 00:24:06.352 [2024-05-15 08:35:53.211412] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.352 [2024-05-15 08:35:53.211418] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.352 [2024-05-15 08:35:53.211421] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.352 [2024-05-15 08:35:53.211424] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64980) on tqpair=0xafcc30 00:24:06.352 [2024-05-15 08:35:53.211428] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:06.352 [2024-05-15 08:35:53.211435] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:06.352 [2024-05-15 08:35:53.211441] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.352 [2024-05-15 08:35:53.211445] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.352 [2024-05-15 08:35:53.211448] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafcc30) 00:24:06.353 [2024-05-15 08:35:53.211454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.353 [2024-05-15 08:35:53.211463] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64980, cid 0, qid 0 00:24:06.353 [2024-05-15 08:35:53.211524] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.353 [2024-05-15 08:35:53.211530] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.353 [2024-05-15 08:35:53.211533] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.211536] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64980) on tqpair=0xafcc30 00:24:06.353 [2024-05-15 08:35:53.211540] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:06.353 [2024-05-15 08:35:53.211547] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:06.353 [2024-05-15 08:35:53.211552] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.211556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.211559] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafcc30) 00:24:06.353 [2024-05-15 08:35:53.211565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.353 [2024-05-15 08:35:53.211576] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64980, cid 0, qid 0 00:24:06.353 [2024-05-15 08:35:53.211637] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.353 [2024-05-15 08:35:53.211642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.353 [2024-05-15 08:35:53.211646] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.211649] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64980) on tqpair=0xafcc30 00:24:06.353 [2024-05-15 08:35:53.211653] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:06.353 [2024-05-15 08:35:53.211660] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.211664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.211667] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafcc30) 00:24:06.353 [2024-05-15 08:35:53.211673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.353 [2024-05-15 08:35:53.211682] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64980, cid 0, qid 0 00:24:06.353 [2024-05-15 08:35:53.211743] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.353 [2024-05-15 08:35:53.211748] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.353 [2024-05-15 08:35:53.211751] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.211755] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64980) on tqpair=0xafcc30 00:24:06.353 [2024-05-15 08:35:53.211759] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:06.353 [2024-05-15 08:35:53.211763] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:06.353 [2024-05-15 08:35:53.211769] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:06.353 [2024-05-15 08:35:53.211874] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:06.353 [2024-05-15 08:35:53.211878] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:06.353 [2024-05-15 08:35:53.211885] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.211888] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.211891] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafcc30) 00:24:06.353 [2024-05-15 08:35:53.211897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.353 [2024-05-15 08:35:53.211906] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64980, cid 0, qid 0 00:24:06.353 [2024-05-15 08:35:53.211991] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.353 [2024-05-15 08:35:53.211996] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.353 
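The FABRIC PROPERTY GET/SET exchanges above are the standard fabrics controller bring-up: read VS and CAP, check CC.EN, hold the controller disabled until CSTS.RDY = 0, then write CC.EN = 1 and poll for CSTS.RDY = 1 (visible just below). To replay the whole exchange by hand against a still-running target, the invocation this test itself used is, from the job's workspace:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/bin/spdk_nvme_identify \
        -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all    # -L all enables the *DEBUG* trace flags seen in this log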
[2024-05-15 08:35:53.211999] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.212003] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64980) on tqpair=0xafcc30 00:24:06.353 [2024-05-15 08:35:53.212007] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:06.353 [2024-05-15 08:35:53.212015] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.212019] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.212022] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafcc30) 00:24:06.353 [2024-05-15 08:35:53.212028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.353 [2024-05-15 08:35:53.212039] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64980, cid 0, qid 0 00:24:06.353 [2024-05-15 08:35:53.212102] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.353 [2024-05-15 08:35:53.212107] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.353 [2024-05-15 08:35:53.212110] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.212113] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64980) on tqpair=0xafcc30 00:24:06.353 [2024-05-15 08:35:53.212118] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:06.353 [2024-05-15 08:35:53.212121] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:06.353 [2024-05-15 08:35:53.212128] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:06.353 [2024-05-15 08:35:53.212135] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:06.353 [2024-05-15 08:35:53.212142] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.212146] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafcc30) 00:24:06.353 [2024-05-15 08:35:53.212151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.353 [2024-05-15 08:35:53.212160] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64980, cid 0, qid 0 00:24:06.353 [2024-05-15 08:35:53.212258] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.353 [2024-05-15 08:35:53.212264] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.353 [2024-05-15 08:35:53.212267] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.212270] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafcc30): datao=0, datal=4096, cccid=0 00:24:06.353 [2024-05-15 08:35:53.212274] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb64980) on tqpair(0xafcc30): expected_datao=0, payload_size=4096 00:24:06.353 
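The IDENTIFY (06) command with cdw10:00000001 above is Identify Controller (CNS 01h), and the c2h_data header just above announces its 4096-byte payload. A kernel-initiator sketch of the same kind of read, issued against the cnode1 I/O subsystem this run exported rather than the discovery controller (the /dev/nvme0 name is an assumption; it depends on what the host already has attached):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0                        # fetches the same 4096-byte Identify Controller page
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1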
[2024-05-15 08:35:53.212278] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.212292] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.212297] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258172] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.353 [2024-05-15 08:35:53.258184] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.353 [2024-05-15 08:35:53.258187] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258191] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64980) on tqpair=0xafcc30 00:24:06.353 [2024-05-15 08:35:53.258198] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:06.353 [2024-05-15 08:35:53.258203] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:06.353 [2024-05-15 08:35:53.258207] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:06.353 [2024-05-15 08:35:53.258211] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:06.353 [2024-05-15 08:35:53.258216] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:06.353 [2024-05-15 08:35:53.258220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:06.353 [2024-05-15 08:35:53.258231] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:06.353 [2024-05-15 08:35:53.258242] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258246] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258249] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafcc30) 00:24:06.353 [2024-05-15 08:35:53.258256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:06.353 [2024-05-15 08:35:53.258269] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64980, cid 0, qid 0 00:24:06.353 [2024-05-15 08:35:53.258423] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.353 [2024-05-15 08:35:53.258429] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.353 [2024-05-15 08:35:53.258432] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258435] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64980) on tqpair=0xafcc30 00:24:06.353 [2024-05-15 08:35:53.258441] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258444] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258447] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafcc30) 00:24:06.353 [2024-05-15 08:35:53.258453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.353 [2024-05-15 08:35:53.258458] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258461] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258464] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xafcc30) 00:24:06.353 [2024-05-15 08:35:53.258469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.353 [2024-05-15 08:35:53.258474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258477] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258480] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xafcc30) 00:24:06.353 [2024-05-15 08:35:53.258485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.353 [2024-05-15 08:35:53.258490] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258493] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.353 [2024-05-15 08:35:53.258496] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafcc30) 00:24:06.354 [2024-05-15 08:35:53.258501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.354 [2024-05-15 08:35:53.258505] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:06.354 [2024-05-15 08:35:53.258515] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:06.354 [2024-05-15 08:35:53.258520] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258523] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafcc30) 00:24:06.354 [2024-05-15 08:35:53.258529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.354 [2024-05-15 08:35:53.258540] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64980, cid 0, qid 0 00:24:06.354 [2024-05-15 08:35:53.258544] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64ae0, cid 1, qid 0 00:24:06.354 [2024-05-15 08:35:53.258548] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64c40, cid 2, qid 0 00:24:06.354 [2024-05-15 08:35:53.258554] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64da0, cid 3, qid 0 00:24:06.354 [2024-05-15 08:35:53.258558] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64f00, cid 4, qid 0 00:24:06.354 [2024-05-15 08:35:53.258661] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.354 [2024-05-15 08:35:53.258666] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.354 [2024-05-15 08:35:53.258669] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258673] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64f00) on 
tqpair=0xafcc30 00:24:06.354 [2024-05-15 08:35:53.258677] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:06.354 [2024-05-15 08:35:53.258681] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:06.354 [2024-05-15 08:35:53.258690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258694] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafcc30) 00:24:06.354 [2024-05-15 08:35:53.258700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.354 [2024-05-15 08:35:53.258709] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64f00, cid 4, qid 0 00:24:06.354 [2024-05-15 08:35:53.258784] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.354 [2024-05-15 08:35:53.258790] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.354 [2024-05-15 08:35:53.258793] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258796] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafcc30): datao=0, datal=4096, cccid=4 00:24:06.354 [2024-05-15 08:35:53.258800] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb64f00) on tqpair(0xafcc30): expected_datao=0, payload_size=4096 00:24:06.354 [2024-05-15 08:35:53.258804] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258810] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258813] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258826] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.354 [2024-05-15 08:35:53.258831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.354 [2024-05-15 08:35:53.258834] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258837] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64f00) on tqpair=0xafcc30 00:24:06.354 [2024-05-15 08:35:53.258848] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:06.354 [2024-05-15 08:35:53.258870] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258874] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafcc30) 00:24:06.354 [2024-05-15 08:35:53.258879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.354 [2024-05-15 08:35:53.258885] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258888] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.258891] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafcc30) 00:24:06.354 [2024-05-15 08:35:53.258896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.354 [2024-05-15 08:35:53.258909] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64f00, cid 4, qid 0 00:24:06.354 [2024-05-15 08:35:53.258913] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65060, cid 5, qid 0 00:24:06.354 [2024-05-15 08:35:53.259010] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.354 [2024-05-15 08:35:53.259016] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.354 [2024-05-15 08:35:53.259019] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.259022] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafcc30): datao=0, datal=1024, cccid=4 00:24:06.354 [2024-05-15 08:35:53.259026] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb64f00) on tqpair(0xafcc30): expected_datao=0, payload_size=1024 00:24:06.354 [2024-05-15 08:35:53.259029] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.259035] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.259038] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.259043] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.354 [2024-05-15 08:35:53.259047] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.354 [2024-05-15 08:35:53.259050] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.259054] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb65060) on tqpair=0xafcc30 00:24:06.354 [2024-05-15 08:35:53.299304] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.354 [2024-05-15 08:35:53.299315] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.354 [2024-05-15 08:35:53.299318] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.299322] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64f00) on tqpair=0xafcc30 00:24:06.354 [2024-05-15 08:35:53.299333] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.299337] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafcc30) 00:24:06.354 [2024-05-15 08:35:53.299343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.354 [2024-05-15 08:35:53.299358] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64f00, cid 4, qid 0 00:24:06.354 [2024-05-15 08:35:53.299435] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.354 [2024-05-15 08:35:53.299440] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.354 [2024-05-15 08:35:53.299443] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.299446] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafcc30): datao=0, datal=3072, cccid=4 00:24:06.354 [2024-05-15 08:35:53.299450] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb64f00) on tqpair(0xafcc30): expected_datao=0, payload_size=3072 00:24:06.354 [2024-05-15 08:35:53.299454] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.354 [2024-05-15 08:35:53.299460] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
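The GET LOG PAGE (02) commands in this stretch, with cdw10 values ending in 70, are reads of the Discovery log page (log identifier 70h), paged in over several transfers (1024 bytes, then 3072, then an 8-byte re-read of the generation counter just below). nvme-cli collapses the same exchange into one command; a sketch, assuming the discovery listener from this run is still up:

    nvme discover -t tcp -a 10.0.0.2 -s 4420   # prints the discovery log entries,
                                               # including nqn.2016-06.io.spdk:cnode1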
00:24:06.354 [2024-05-15 08:35:53.299463] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:06.354 [2024-05-15 08:35:53.299474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:06.354 [2024-05-15 08:35:53.299479] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:06.354 [2024-05-15 08:35:53.299482] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:06.354 [2024-05-15 08:35:53.299485] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64f00) on tqpair=0xafcc30
00:24:06.354 [2024-05-15 08:35:53.299492] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:06.354 [2024-05-15 08:35:53.299496] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafcc30)
00:24:06.354 [2024-05-15 08:35:53.299502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:06.354 [2024-05-15 08:35:53.299514] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64f00, cid 4, qid 0
00:24:06.354 [2024-05-15 08:35:53.299585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:06.354 [2024-05-15 08:35:53.299593] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:06.354 [2024-05-15 08:35:53.299596] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:06.354 [2024-05-15 08:35:53.299599] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafcc30): datao=0, datal=8, cccid=4
00:24:06.354 [2024-05-15 08:35:53.299603] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb64f00) on tqpair(0xafcc30): expected_datao=0, payload_size=8
00:24:06.354 [2024-05-15 08:35:53.299607] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:06.354 [2024-05-15 08:35:53.299612] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:06.354 [2024-05-15 08:35:53.299615] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:06.354 [2024-05-15 08:35:53.341302] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:06.354 [2024-05-15 08:35:53.341312] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:06.354 [2024-05-15 08:35:53.341315] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:06.354 [2024-05-15 08:35:53.341318] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64f00) on tqpair=0xafcc30
00:24:06.354 =====================================================
00:24:06.354 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:06.354 =====================================================
00:24:06.354 Controller Capabilities/Features
00:24:06.354 ================================
00:24:06.354 Vendor ID: 0000
00:24:06.354 Subsystem Vendor ID: 0000
00:24:06.354 Serial Number: ....................
00:24:06.354 Model Number: ........................................
00:24:06.354 Firmware Version: 24.05
00:24:06.354 Recommended Arb Burst: 0
00:24:06.354 IEEE OUI Identifier: 00 00 00
00:24:06.354 Multi-path I/O
00:24:06.354 May have multiple subsystem ports: No
00:24:06.354 May have multiple controllers: No
00:24:06.354 Associated with SR-IOV VF: No
00:24:06.354 Max Data Transfer Size: 131072
00:24:06.354 Max Number of Namespaces: 0
00:24:06.354 Max Number of I/O Queues: 1024
00:24:06.354 NVMe Specification Version (VS): 1.3
00:24:06.354 NVMe Specification Version (Identify): 1.3
00:24:06.354 Maximum Queue Entries: 128
00:24:06.354 Contiguous Queues Required: Yes
00:24:06.354 Arbitration Mechanisms Supported
00:24:06.354 Weighted Round Robin: Not Supported
00:24:06.354 Vendor Specific: Not Supported
00:24:06.354 Reset Timeout: 15000 ms
00:24:06.354 Doorbell Stride: 4 bytes
00:24:06.354 NVM Subsystem Reset: Not Supported
00:24:06.354 Command Sets Supported
00:24:06.355 NVM Command Set: Supported
00:24:06.355 Boot Partition: Not Supported
00:24:06.355 Memory Page Size Minimum: 4096 bytes
00:24:06.355 Memory Page Size Maximum: 4096 bytes
00:24:06.355 Persistent Memory Region: Not Supported
00:24:06.355 Optional Asynchronous Events Supported
00:24:06.355 Namespace Attribute Notices: Not Supported
00:24:06.355 Firmware Activation Notices: Not Supported
00:24:06.355 ANA Change Notices: Not Supported
00:24:06.355 PLE Aggregate Log Change Notices: Not Supported
00:24:06.355 LBA Status Info Alert Notices: Not Supported
00:24:06.355 EGE Aggregate Log Change Notices: Not Supported
00:24:06.355 Normal NVM Subsystem Shutdown event: Not Supported
00:24:06.355 Zone Descriptor Change Notices: Not Supported
00:24:06.355 Discovery Log Change Notices: Supported
00:24:06.355 Controller Attributes
00:24:06.355 128-bit Host Identifier: Not Supported
00:24:06.355 Non-Operational Permissive Mode: Not Supported
00:24:06.355 NVM Sets: Not Supported
00:24:06.355 Read Recovery Levels: Not Supported
00:24:06.355 Endurance Groups: Not Supported
00:24:06.355 Predictable Latency Mode: Not Supported
00:24:06.355 Traffic Based Keep ALive: Not Supported
00:24:06.355 Namespace Granularity: Not Supported
00:24:06.355 SQ Associations: Not Supported
00:24:06.355 UUID List: Not Supported
00:24:06.355 Multi-Domain Subsystem: Not Supported
00:24:06.355 Fixed Capacity Management: Not Supported
00:24:06.355 Variable Capacity Management: Not Supported
00:24:06.355 Delete Endurance Group: Not Supported
00:24:06.355 Delete NVM Set: Not Supported
00:24:06.355 Extended LBA Formats Supported: Not Supported
00:24:06.355 Flexible Data Placement Supported: Not Supported
00:24:06.355 
00:24:06.355 Controller Memory Buffer Support
00:24:06.355 ================================
00:24:06.355 Supported: No
00:24:06.355 
00:24:06.355 Persistent Memory Region Support
00:24:06.355 ================================
00:24:06.355 Supported: No
00:24:06.355 
00:24:06.355 Admin Command Set Attributes
00:24:06.355 ============================
00:24:06.355 Security Send/Receive: Not Supported
00:24:06.355 Format NVM: Not Supported
00:24:06.355 Firmware Activate/Download: Not Supported
00:24:06.355 Namespace Management: Not Supported
00:24:06.355 Device Self-Test: Not Supported
00:24:06.355 Directives: Not Supported
00:24:06.355 NVMe-MI: Not Supported
00:24:06.355 Virtualization Management: Not Supported
00:24:06.355 Doorbell Buffer Config: Not Supported
00:24:06.355 Get LBA Status Capability: Not Supported
00:24:06.355 Command & Feature Lockdown Capability: Not Supported
00:24:06.355 Abort Command Limit: 1
00:24:06.355 Async Event Request Limit: 4
00:24:06.355 Number of Firmware Slots: N/A
00:24:06.355 Firmware Slot 1 Read-Only: N/A
00:24:06.355 Firmware Activation Without Reset: N/A
00:24:06.355 Multiple Update Detection Support: N/A
00:24:06.355 Firmware Update Granularity: No Information Provided
00:24:06.355 Per-Namespace SMART Log: No
00:24:06.355 Asymmetric Namespace Access Log Page: Not Supported
00:24:06.355 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:06.355 Command Effects Log Page: Not Supported
00:24:06.355 Get Log Page Extended Data: Supported
00:24:06.355 Telemetry Log Pages: Not Supported
00:24:06.355 Persistent Event Log Pages: Not Supported
00:24:06.355 Supported Log Pages Log Page: May Support
00:24:06.355 Commands Supported & Effects Log Page: Not Supported
00:24:06.355 Feature Identifiers & Effects Log Page:May Support
00:24:06.355 NVMe-MI Commands & Effects Log Page: May Support
00:24:06.355 Data Area 4 for Telemetry Log: Not Supported
00:24:06.355 Error Log Page Entries Supported: 128
00:24:06.355 Keep Alive: Not Supported
00:24:06.355 
00:24:06.355 NVM Command Set Attributes
00:24:06.355 ==========================
00:24:06.355 Submission Queue Entry Size
00:24:06.355 Max: 1
00:24:06.355 Min: 1
00:24:06.355 Completion Queue Entry Size
00:24:06.355 Max: 1
00:24:06.355 Min: 1
00:24:06.355 Number of Namespaces: 0
00:24:06.355 Compare Command: Not Supported
00:24:06.355 Write Uncorrectable Command: Not Supported
00:24:06.355 Dataset Management Command: Not Supported
00:24:06.355 Write Zeroes Command: Not Supported
00:24:06.355 Set Features Save Field: Not Supported
00:24:06.355 Reservations: Not Supported
00:24:06.355 Timestamp: Not Supported
00:24:06.355 Copy: Not Supported
00:24:06.355 Volatile Write Cache: Not Present
00:24:06.355 Atomic Write Unit (Normal): 1
00:24:06.355 Atomic Write Unit (PFail): 1
00:24:06.355 Atomic Compare & Write Unit: 1
00:24:06.355 Fused Compare & Write: Supported
00:24:06.355 Scatter-Gather List
00:24:06.355 SGL Command Set: Supported
00:24:06.355 SGL Keyed: Supported
00:24:06.355 SGL Bit Bucket Descriptor: Not Supported
00:24:06.355 SGL Metadata Pointer: Not Supported
00:24:06.355 Oversized SGL: Not Supported
00:24:06.355 SGL Metadata Address: Not Supported
00:24:06.355 SGL Offset: Supported
00:24:06.355 Transport SGL Data Block: Not Supported
00:24:06.355 Replay Protected Memory Block: Not Supported
00:24:06.355 
00:24:06.355 Firmware Slot Information
00:24:06.355 =========================
00:24:06.355 Active slot: 0
00:24:06.355 
00:24:06.355 
00:24:06.355 Error Log
00:24:06.355 =========
00:24:06.355 
00:24:06.355 Active Namespaces
00:24:06.355 =================
00:24:06.355 Discovery Log Page
00:24:06.355 ==================
00:24:06.355 Generation Counter: 2
00:24:06.355 Number of Records: 2
00:24:06.355 Record Format: 0
00:24:06.355 
00:24:06.355 Discovery Log Entry 0
00:24:06.355 ----------------------
00:24:06.355 Transport Type: 3 (TCP)
00:24:06.355 Address Family: 1 (IPv4)
00:24:06.355 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:06.355 Entry Flags:
00:24:06.355 Duplicate Returned Information: 1
00:24:06.355 Explicit Persistent Connection Support for Discovery: 1
00:24:06.355 Transport Requirements:
00:24:06.355 Secure Channel: Not Required
00:24:06.355 Port ID: 0 (0x0000)
00:24:06.355 Controller ID: 65535 (0xffff)
00:24:06.355 Admin Max SQ Size: 128
00:24:06.355 Transport Service Identifier: 4420
00:24:06.355 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:06.355 Transport Address: 10.0.0.2
00:24:06.355 Discovery Log Entry 1
00:24:06.355 ----------------------
00:24:06.355 Transport Type: 3 (TCP)
00:24:06.355 Address Family: 1 (IPv4)
00:24:06.355 Subsystem Type: 2 (NVM Subsystem)
00:24:06.355 Entry Flags:
00:24:06.355 Duplicate Returned Information: 0
00:24:06.355 Explicit Persistent Connection Support for Discovery: 0
00:24:06.355 Transport Requirements:
00:24:06.355 Secure Channel: Not Required
00:24:06.355 Port ID: 0 (0x0000)
00:24:06.355 Controller ID: 65535 (0xffff)
00:24:06.355 Admin Max SQ Size: 128
00:24:06.355 Transport Service Identifier: 4420
00:24:06.355 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:06.355 Transport Address: 10.0.0.2 [2024-05-15 08:35:53.341484] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:24:06.355 [2024-05-15 08:35:53.341497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.355 [2024-05-15 08:35:53.341504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.355 [2024-05-15 08:35:53.341509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.355 [2024-05-15 08:35:53.341514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.355 [2024-05-15 08:35:53.341521] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:06.355 [2024-05-15 08:35:53.341525] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:06.355 [2024-05-15 08:35:53.341528] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafcc30)
00:24:06.355 [2024-05-15 08:35:53.341535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:06.355 [2024-05-15 08:35:53.341548] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64da0, cid 3, qid 0
00:24:06.355 [2024-05-15 08:35:53.341610] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:06.355 [2024-05-15 08:35:53.341616] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:06.355 [2024-05-15 08:35:53.341619] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:06.355 [2024-05-15 08:35:53.341623] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64da0) on tqpair=0xafcc30
00:24:06.355 [2024-05-15 08:35:53.341629] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:06.355 [2024-05-15 08:35:53.341632] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:06.355 [2024-05-15 08:35:53.341635] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafcc30)
00:24:06.355 [2024-05-15 08:35:53.341641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:06.355 [2024-05-15 08:35:53.341653] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64da0, cid 3, qid 0
00:24:06.355 [2024-05-15 08:35:53.341725] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:06.355 [2024-05-15 08:35:53.341731] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:06.355 [2024-05-15 08:35:53.341734]
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.355 [2024-05-15 08:35:53.341737] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64da0) on tqpair=0xafcc30 00:24:06.356 [2024-05-15 08:35:53.341741] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:06.356 [2024-05-15 08:35:53.341747] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:06.356 [2024-05-15 08:35:53.341755] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.356 [2024-05-15 08:35:53.341759] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.356 [2024-05-15 08:35:53.341762] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafcc30) 00:24:06.356 [2024-05-15 08:35:53.341767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.356 [2024-05-15 08:35:53.341776] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64da0, cid 3, qid 0 00:24:06.356 [2024-05-15 08:35:53.341841] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.356 [2024-05-15 08:35:53.341846] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.356 [2024-05-15 08:35:53.341849] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.356 [2024-05-15 08:35:53.341852] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64da0) on tqpair=0xafcc30 00:24:06.356 [2024-05-15 08:35:53.341861] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.356 [2024-05-15 08:35:53.341865] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.356 [2024-05-15 08:35:53.341868] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafcc30) 00:24:06.356 [2024-05-15 08:35:53.341873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.356 [2024-05-15 08:35:53.341882] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64da0, cid 3, qid 0 00:24:06.356 [2024-05-15 08:35:53.341959] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.356 [2024-05-15 08:35:53.341964] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.356 [2024-05-15 08:35:53.341967] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.356 [2024-05-15 08:35:53.341970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64da0) on tqpair=0xafcc30 00:24:06.356 [2024-05-15 08:35:53.341979] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.356 [2024-05-15 08:35:53.341982] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.356 [2024-05-15 08:35:53.341985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafcc30) 00:24:06.356 [2024-05-15 08:35:53.341991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.356 [2024-05-15 08:35:53.342000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64da0, cid 3, qid 0 00:24:06.356 [2024-05-15 08:35:53.342076] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.356 [2024-05-15 
08:35:53.342082] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:06.356 [2024-05-15 08:35:53.342085] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:06.356 [2024-05-15 08:35:53.342088] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64da0) on tqpair=0xafcc30
00:24:06.356 [2024-05-15 08:35:53.342096] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:06.356 [2024-05-15 08:35:53.342100] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:06.356 [2024-05-15 08:35:53.342103] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafcc30)
00:24:06.356 [2024-05-15 08:35:53.342108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:06.356 [2024-05-15 08:35:53.342117] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64da0, cid 3, qid 0
00:24:06.356 [2024-05-15 08:35:53.346172] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:06.356 [2024-05-15 08:35:53.346179] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:06.356 [2024-05-15 08:35:53.346182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:06.356 [2024-05-15 08:35:53.346185] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64da0) on tqpair=0xafcc30
00:24:06.356 [2024-05-15 08:35:53.346196] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:06.356 [2024-05-15 08:35:53.346200] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:06.356 [2024-05-15 08:35:53.346203] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafcc30)
00:24:06.356 [2024-05-15 08:35:53.346208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:06.356 [2024-05-15 08:35:53.346219] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb64da0, cid 3, qid 0
00:24:06.356 [2024-05-15 08:35:53.346369] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:06.356 [2024-05-15 08:35:53.346375] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:06.356 [2024-05-15 08:35:53.346378] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:06.356 [2024-05-15 08:35:53.346381] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb64da0) on tqpair=0xafcc30
00:24:06.356 [2024-05-15 08:35:53.346387] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds
00:24:06.356 
00:24:06.356 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:24:06.620 [2024-05-15 08:35:53.381395] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:24:06.620 [2024-05-15 08:35:53.381440] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374205 ]
00:24:06.620 EAL: No free 2048 kB hugepages reported on node 1
00:24:06.620 [2024-05-15 08:35:53.411387] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:24:06.620 [2024-05-15 08:35:53.411429] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:24:06.620 [2024-05-15 08:35:53.411434] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:24:06.620 [2024-05-15 08:35:53.411443] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:24:06.620 [2024-05-15 08:35:53.411450] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:24:06.620 [2024-05-15 08:35:53.411664] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:24:06.620 [2024-05-15 08:35:53.411684] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1490c30 0
00:24:06.620 [2024-05-15 08:35:53.418174] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:24:06.620 [2024-05-15 08:35:53.418184] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:24:06.620 [2024-05-15 08:35:53.418190] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:24:06.620 [2024-05-15 08:35:53.418194] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:24:06.620 [2024-05-15 08:35:53.418223] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:06.620 [2024-05-15 08:35:53.418228] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:06.620 [2024-05-15 08:35:53.418233] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1490c30)
00:24:06.620 [2024-05-15 08:35:53.418244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:24:06.620 [2024-05-15 08:35:53.418258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8980, cid 0, qid 0
00:24:06.620 [2024-05-15 08:35:53.426178] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:06.620 [2024-05-15 08:35:53.426190] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:06.620 [2024-05-15 08:35:53.426193] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:06.620 [2024-05-15 08:35:53.426197] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8980) on tqpair=0x1490c30
00:24:06.620 [2024-05-15 08:35:53.426208] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:24:06.620 [2024-05-15 08:35:53.426214] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:24:06.620 [2024-05-15 08:35:53.426219] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:24:06.620 [2024-05-15 08:35:53.426227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:06.621 [2024-05-15 08:35:53.426231] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:06.621 [2024-05-15
08:35:53.426234] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1490c30) 00:24:06.621 [2024-05-15 08:35:53.426241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.621 [2024-05-15 08:35:53.426253] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8980, cid 0, qid 0 00:24:06.621 [2024-05-15 08:35:53.426340] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.621 [2024-05-15 08:35:53.426345] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.621 [2024-05-15 08:35:53.426348] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426352] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8980) on tqpair=0x1490c30 00:24:06.621 [2024-05-15 08:35:53.426357] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:06.621 [2024-05-15 08:35:53.426363] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:06.621 [2024-05-15 08:35:53.426370] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426373] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426376] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1490c30) 00:24:06.621 [2024-05-15 08:35:53.426381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.621 [2024-05-15 08:35:53.426391] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8980, cid 0, qid 0 00:24:06.621 [2024-05-15 08:35:53.426457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.621 [2024-05-15 08:35:53.426463] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.621 [2024-05-15 08:35:53.426466] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426469] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8980) on tqpair=0x1490c30 00:24:06.621 [2024-05-15 08:35:53.426474] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:06.621 [2024-05-15 08:35:53.426481] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:06.621 [2024-05-15 08:35:53.426486] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426490] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426493] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1490c30) 00:24:06.621 [2024-05-15 08:35:53.426498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.621 [2024-05-15 08:35:53.426508] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8980, cid 0, qid 0 00:24:06.621 [2024-05-15 08:35:53.426569] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.621 [2024-05-15 08:35:53.426575] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:06.621 [2024-05-15 08:35:53.426580] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426583] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8980) on tqpair=0x1490c30 00:24:06.621 [2024-05-15 08:35:53.426588] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:06.621 [2024-05-15 08:35:53.426596] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426600] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426603] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1490c30) 00:24:06.621 [2024-05-15 08:35:53.426608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.621 [2024-05-15 08:35:53.426618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8980, cid 0, qid 0 00:24:06.621 [2024-05-15 08:35:53.426681] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.621 [2024-05-15 08:35:53.426686] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.621 [2024-05-15 08:35:53.426689] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426692] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8980) on tqpair=0x1490c30 00:24:06.621 [2024-05-15 08:35:53.426697] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:06.621 [2024-05-15 08:35:53.426701] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:06.621 [2024-05-15 08:35:53.426707] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:06.621 [2024-05-15 08:35:53.426812] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:06.621 [2024-05-15 08:35:53.426815] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:06.621 [2024-05-15 08:35:53.426822] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426825] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426828] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1490c30) 00:24:06.621 [2024-05-15 08:35:53.426834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.621 [2024-05-15 08:35:53.426843] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8980, cid 0, qid 0 00:24:06.621 [2024-05-15 08:35:53.426904] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.621 [2024-05-15 08:35:53.426909] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.621 [2024-05-15 08:35:53.426912] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8980) on 
tqpair=0x1490c30 00:24:06.621 [2024-05-15 08:35:53.426920] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:06.621 [2024-05-15 08:35:53.426928] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426932] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.426935] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1490c30) 00:24:06.621 [2024-05-15 08:35:53.426940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.621 [2024-05-15 08:35:53.426949] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8980, cid 0, qid 0 00:24:06.621 [2024-05-15 08:35:53.427014] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.621 [2024-05-15 08:35:53.427022] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.621 [2024-05-15 08:35:53.427025] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.427028] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8980) on tqpair=0x1490c30 00:24:06.621 [2024-05-15 08:35:53.427032] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:06.621 [2024-05-15 08:35:53.427036] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:06.621 [2024-05-15 08:35:53.427042] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:06.621 [2024-05-15 08:35:53.427049] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:06.621 [2024-05-15 08:35:53.427056] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.427059] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1490c30) 00:24:06.621 [2024-05-15 08:35:53.427065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.621 [2024-05-15 08:35:53.427075] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8980, cid 0, qid 0 00:24:06.621 [2024-05-15 08:35:53.427175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.621 [2024-05-15 08:35:53.427181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.621 [2024-05-15 08:35:53.427184] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.427187] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1490c30): datao=0, datal=4096, cccid=0 00:24:06.621 [2024-05-15 08:35:53.427191] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14f8980) on tqpair(0x1490c30): expected_datao=0, payload_size=4096 00:24:06.621 [2024-05-15 08:35:53.427195] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.427208] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.427212] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.468305] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.621 [2024-05-15 08:35:53.468315] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.621 [2024-05-15 08:35:53.468318] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.468321] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8980) on tqpair=0x1490c30 00:24:06.621 [2024-05-15 08:35:53.468329] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:06.621 [2024-05-15 08:35:53.468333] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:06.621 [2024-05-15 08:35:53.468337] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:06.621 [2024-05-15 08:35:53.468340] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:06.621 [2024-05-15 08:35:53.468344] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:06.621 [2024-05-15 08:35:53.468348] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:06.621 [2024-05-15 08:35:53.468360] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:06.621 [2024-05-15 08:35:53.468367] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.468371] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.468374] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1490c30) 00:24:06.621 [2024-05-15 08:35:53.468383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:06.621 [2024-05-15 08:35:53.468394] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8980, cid 0, qid 0 00:24:06.621 [2024-05-15 08:35:53.468454] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.621 [2024-05-15 08:35:53.468460] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.621 [2024-05-15 08:35:53.468462] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.468466] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8980) on tqpair=0x1490c30 00:24:06.621 [2024-05-15 08:35:53.468472] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.621 [2024-05-15 08:35:53.468476] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468478] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1490c30) 00:24:06.622 [2024-05-15 08:35:53.468484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.622 [2024-05-15 08:35:53.468489] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468492] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468495] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1490c30) 00:24:06.622 [2024-05-15 08:35:53.468500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.622 [2024-05-15 08:35:53.468505] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468508] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468511] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1490c30) 00:24:06.622 [2024-05-15 08:35:53.468516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.622 [2024-05-15 08:35:53.468521] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468524] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468527] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.622 [2024-05-15 08:35:53.468532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.622 [2024-05-15 08:35:53.468536] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.468546] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.468551] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468554] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1490c30) 00:24:06.622 [2024-05-15 08:35:53.468560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.622 [2024-05-15 08:35:53.468571] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8980, cid 0, qid 0 00:24:06.622 [2024-05-15 08:35:53.468576] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8ae0, cid 1, qid 0 00:24:06.622 [2024-05-15 08:35:53.468580] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8c40, cid 2, qid 0 00:24:06.622 [2024-05-15 08:35:53.468584] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.622 [2024-05-15 08:35:53.468588] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8f00, cid 4, qid 0 00:24:06.622 [2024-05-15 08:35:53.468686] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.622 [2024-05-15 08:35:53.468694] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.622 [2024-05-15 08:35:53.468697] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468700] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8f00) on tqpair=0x1490c30 00:24:06.622 [2024-05-15 08:35:53.468705] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:06.622 [2024-05-15 08:35:53.468709] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.468716] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.468723] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.468729] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468732] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468735] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1490c30) 00:24:06.622 [2024-05-15 08:35:53.468741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:06.622 [2024-05-15 08:35:53.468751] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8f00, cid 4, qid 0 00:24:06.622 [2024-05-15 08:35:53.468814] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.622 [2024-05-15 08:35:53.468820] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.622 [2024-05-15 08:35:53.468823] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468826] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8f00) on tqpair=0x1490c30 00:24:06.622 [2024-05-15 08:35:53.468870] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.468879] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.468886] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468889] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1490c30) 00:24:06.622 [2024-05-15 08:35:53.468895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.622 [2024-05-15 08:35:53.468905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8f00, cid 4, qid 0 00:24:06.622 [2024-05-15 08:35:53.468981] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.622 [2024-05-15 08:35:53.468987] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.622 [2024-05-15 08:35:53.468990] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.468993] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1490c30): datao=0, datal=4096, cccid=4 00:24:06.622 [2024-05-15 08:35:53.468997] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14f8f00) on tqpair(0x1490c30): expected_datao=0, payload_size=4096 00:24:06.622 [2024-05-15 08:35:53.469001] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469006] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469010] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469019] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.622 [2024-05-15 08:35:53.469024] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.622 [2024-05-15 08:35:53.469027] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469030] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8f00) on tqpair=0x1490c30 00:24:06.622 [2024-05-15 08:35:53.469042] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:06.622 [2024-05-15 08:35:53.469051] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.469059] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.469065] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469069] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1490c30) 00:24:06.622 [2024-05-15 08:35:53.469074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.622 [2024-05-15 08:35:53.469085] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8f00, cid 4, qid 0 00:24:06.622 [2024-05-15 08:35:53.469174] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.622 [2024-05-15 08:35:53.469180] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.622 [2024-05-15 08:35:53.469183] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469186] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1490c30): datao=0, datal=4096, cccid=4 00:24:06.622 [2024-05-15 08:35:53.469190] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14f8f00) on tqpair(0x1490c30): expected_datao=0, payload_size=4096 00:24:06.622 [2024-05-15 08:35:53.469194] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469199] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469203] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469231] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.622 [2024-05-15 08:35:53.469237] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.622 [2024-05-15 08:35:53.469240] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469243] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8f00) on tqpair=0x1490c30 00:24:06.622 [2024-05-15 08:35:53.469251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.469260] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.469266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469270] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1490c30) 00:24:06.622 [2024-05-15 08:35:53.469275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.622 [2024-05-15 08:35:53.469286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8f00, cid 4, qid 0 00:24:06.622 [2024-05-15 08:35:53.469357] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.622 [2024-05-15 08:35:53.469362] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.622 [2024-05-15 08:35:53.469365] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469368] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1490c30): datao=0, datal=4096, cccid=4 00:24:06.622 [2024-05-15 08:35:53.469372] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14f8f00) on tqpair(0x1490c30): expected_datao=0, payload_size=4096 00:24:06.622 [2024-05-15 08:35:53.469376] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469381] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469384] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.622 [2024-05-15 08:35:53.469404] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.622 [2024-05-15 08:35:53.469407] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.622 [2024-05-15 08:35:53.469410] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8f00) on tqpair=0x1490c30 00:24:06.622 [2024-05-15 08:35:53.469419] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.469427] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.469433] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.469439] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:06.622 [2024-05-15 08:35:53.469443] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:06.623 [2024-05-15 08:35:53.469447] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:06.623 [2024-05-15 08:35:53.469451] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:06.623 [2024-05-15 08:35:53.469456] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:06.623 [2024-05-15 08:35:53.469470] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469474] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1490c30) 00:24:06.623 [2024-05-15 08:35:53.469479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.623 [2024-05-15 08:35:53.469485] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469488] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469491] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1490c30) 00:24:06.623 [2024-05-15 08:35:53.469497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.623 [2024-05-15 08:35:53.469509] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8f00, cid 4, qid 0 00:24:06.623 [2024-05-15 08:35:53.469513] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f9060, cid 5, qid 0 00:24:06.623 [2024-05-15 08:35:53.469594] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.623 [2024-05-15 08:35:53.469599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.623 [2024-05-15 08:35:53.469602] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469606] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8f00) on tqpair=0x1490c30 00:24:06.623 [2024-05-15 08:35:53.469612] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.623 [2024-05-15 08:35:53.469617] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.623 [2024-05-15 08:35:53.469620] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469623] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f9060) on tqpair=0x1490c30 00:24:06.623 [2024-05-15 08:35:53.469632] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469635] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1490c30) 00:24:06.623 [2024-05-15 08:35:53.469640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.623 [2024-05-15 08:35:53.469649] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f9060, cid 5, qid 0 00:24:06.623 [2024-05-15 08:35:53.469719] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.623 [2024-05-15 08:35:53.469725] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.623 [2024-05-15 08:35:53.469728] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469731] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f9060) on tqpair=0x1490c30 00:24:06.623 [2024-05-15 08:35:53.469739] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469743] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1490c30) 00:24:06.623 [2024-05-15 08:35:53.469748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.623 [2024-05-15 08:35:53.469757] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f9060, cid 5, qid 0 00:24:06.623 [2024-05-15 08:35:53.469821] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.623 [2024-05-15 08:35:53.469826] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.623 [2024-05-15 08:35:53.469829] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469832] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f9060) on tqpair=0x1490c30 00:24:06.623 [2024-05-15 08:35:53.469840] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469844] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1490c30) 00:24:06.623 [2024-05-15 08:35:53.469849] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.623 [2024-05-15 08:35:53.469858] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f9060, cid 5, qid 0 00:24:06.623 [2024-05-15 08:35:53.469929] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.623 [2024-05-15 08:35:53.469935] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.623 [2024-05-15 08:35:53.469938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f9060) on tqpair=0x1490c30 00:24:06.623 [2024-05-15 08:35:53.469952] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469955] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1490c30) 00:24:06.623 [2024-05-15 08:35:53.469961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.623 [2024-05-15 08:35:53.469967] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469970] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1490c30) 00:24:06.623 [2024-05-15 08:35:53.469975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.623 [2024-05-15 08:35:53.469982] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.469985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1490c30) 00:24:06.623 [2024-05-15 08:35:53.469990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.623 [2024-05-15 08:35:53.469998] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.470001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1490c30) 00:24:06.623 [2024-05-15 08:35:53.470007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.623 [2024-05-15 08:35:53.470017] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f9060, cid 5, qid 0 00:24:06.623 [2024-05-15 08:35:53.470023] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8f00, cid 4, qid 0 00:24:06.623 [2024-05-15 08:35:53.470027] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x14f91c0, cid 6, qid 0 00:24:06.623 [2024-05-15 08:35:53.470031] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f9320, cid 7, qid 0 00:24:06.623 [2024-05-15 08:35:53.474168] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.623 [2024-05-15 08:35:53.474176] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.623 [2024-05-15 08:35:53.474179] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474182] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1490c30): datao=0, datal=8192, cccid=5 00:24:06.623 [2024-05-15 08:35:53.474186] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14f9060) on tqpair(0x1490c30): expected_datao=0, payload_size=8192 00:24:06.623 [2024-05-15 08:35:53.474189] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474201] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474205] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474213] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.623 [2024-05-15 08:35:53.474217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.623 [2024-05-15 08:35:53.474220] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474224] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1490c30): datao=0, datal=512, cccid=4 00:24:06.623 [2024-05-15 08:35:53.474228] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14f8f00) on tqpair(0x1490c30): expected_datao=0, payload_size=512 00:24:06.623 [2024-05-15 08:35:53.474231] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474237] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474239] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474244] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.623 [2024-05-15 08:35:53.474249] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.623 [2024-05-15 08:35:53.474252] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474255] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1490c30): datao=0, datal=512, cccid=6 00:24:06.623 [2024-05-15 08:35:53.474266] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14f91c0) on tqpair(0x1490c30): expected_datao=0, payload_size=512 00:24:06.623 [2024-05-15 08:35:53.474269] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474275] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474277] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474282] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:06.623 [2024-05-15 08:35:53.474287] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:06.623 [2024-05-15 08:35:53.474290] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474293] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1490c30): datao=0, datal=4096, cccid=7 
00:24:06.623 [2024-05-15 08:35:53.474297] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14f9320) on tqpair(0x1490c30): expected_datao=0, payload_size=4096 00:24:06.623 [2024-05-15 08:35:53.474301] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474306] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474309] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474314] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.623 [2024-05-15 08:35:53.474319] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.623 [2024-05-15 08:35:53.474321] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474327] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f9060) on tqpair=0x1490c30 00:24:06.623 [2024-05-15 08:35:53.474338] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.623 [2024-05-15 08:35:53.474343] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.623 [2024-05-15 08:35:53.474347] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474350] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8f00) on tqpair=0x1490c30 00:24:06.623 [2024-05-15 08:35:53.474357] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.623 [2024-05-15 08:35:53.474362] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.623 [2024-05-15 08:35:53.474365] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.623 [2024-05-15 08:35:53.474369] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f91c0) on tqpair=0x1490c30 00:24:06.623 [2024-05-15 08:35:53.474376] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.623 [2024-05-15 08:35:53.474381] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.623 [2024-05-15 08:35:53.474384] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.624 [2024-05-15 08:35:53.474388] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f9320) on tqpair=0x1490c30 00:24:06.624 ===================================================== 00:24:06.624 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:06.624 ===================================================== 00:24:06.624 Controller Capabilities/Features 00:24:06.624 ================================ 00:24:06.624 Vendor ID: 8086 00:24:06.624 Subsystem Vendor ID: 8086 00:24:06.624 Serial Number: SPDK00000000000001 00:24:06.624 Model Number: SPDK bdev Controller 00:24:06.624 Firmware Version: 24.05 00:24:06.624 Recommended Arb Burst: 6 00:24:06.624 IEEE OUI Identifier: e4 d2 5c 00:24:06.624 Multi-path I/O 00:24:06.624 May have multiple subsystem ports: Yes 00:24:06.624 May have multiple controllers: Yes 00:24:06.624 Associated with SR-IOV VF: No 00:24:06.624 Max Data Transfer Size: 131072 00:24:06.624 Max Number of Namespaces: 32 00:24:06.624 Max Number of I/O Queues: 127 00:24:06.624 NVMe Specification Version (VS): 1.3 00:24:06.624 NVMe Specification Version (Identify): 1.3 00:24:06.624 Maximum Queue Entries: 128 00:24:06.624 Contiguous Queues Required: Yes 00:24:06.624 Arbitration Mechanisms Supported 00:24:06.624 Weighted Round Robin: Not Supported 00:24:06.624 Vendor 
Specific: Not Supported 00:24:06.624 Reset Timeout: 15000 ms 00:24:06.624 Doorbell Stride: 4 bytes 00:24:06.624 NVM Subsystem Reset: Not Supported 00:24:06.624 Command Sets Supported 00:24:06.624 NVM Command Set: Supported 00:24:06.624 Boot Partition: Not Supported 00:24:06.624 Memory Page Size Minimum: 4096 bytes 00:24:06.624 Memory Page Size Maximum: 4096 bytes 00:24:06.624 Persistent Memory Region: Not Supported 00:24:06.624 Optional Asynchronous Events Supported 00:24:06.624 Namespace Attribute Notices: Supported 00:24:06.624 Firmware Activation Notices: Not Supported 00:24:06.624 ANA Change Notices: Not Supported 00:24:06.624 PLE Aggregate Log Change Notices: Not Supported 00:24:06.624 LBA Status Info Alert Notices: Not Supported 00:24:06.624 EGE Aggregate Log Change Notices: Not Supported 00:24:06.624 Normal NVM Subsystem Shutdown event: Not Supported 00:24:06.624 Zone Descriptor Change Notices: Not Supported 00:24:06.624 Discovery Log Change Notices: Not Supported 00:24:06.624 Controller Attributes 00:24:06.624 128-bit Host Identifier: Supported 00:24:06.624 Non-Operational Permissive Mode: Not Supported 00:24:06.624 NVM Sets: Not Supported 00:24:06.624 Read Recovery Levels: Not Supported 00:24:06.624 Endurance Groups: Not Supported 00:24:06.624 Predictable Latency Mode: Not Supported 00:24:06.624 Traffic Based Keep Alive: Not Supported 00:24:06.624 Namespace Granularity: Not Supported 00:24:06.624 SQ Associations: Not Supported 00:24:06.624 UUID List: Not Supported 00:24:06.624 Multi-Domain Subsystem: Not Supported 00:24:06.624 Fixed Capacity Management: Not Supported 00:24:06.624 Variable Capacity Management: Not Supported 00:24:06.624 Delete Endurance Group: Not Supported 00:24:06.624 Delete NVM Set: Not Supported 00:24:06.624 Extended LBA Formats Supported: Not Supported 00:24:06.624 Flexible Data Placement Supported: Not Supported 00:24:06.624 00:24:06.624 Controller Memory Buffer Support 00:24:06.624 ================================ 00:24:06.624 Supported: No 00:24:06.624 00:24:06.624 Persistent Memory Region Support 00:24:06.624 ================================ 00:24:06.624 Supported: No 00:24:06.624 00:24:06.624 Admin Command Set Attributes 00:24:06.624 ============================ 00:24:06.624 Security Send/Receive: Not Supported 00:24:06.624 Format NVM: Not Supported 00:24:06.624 Firmware Activate/Download: Not Supported 00:24:06.624 Namespace Management: Not Supported 00:24:06.624 Device Self-Test: Not Supported 00:24:06.624 Directives: Not Supported 00:24:06.624 NVMe-MI: Not Supported 00:24:06.624 Virtualization Management: Not Supported 00:24:06.624 Doorbell Buffer Config: Not Supported 00:24:06.624 Get LBA Status Capability: Not Supported 00:24:06.624 Command & Feature Lockdown Capability: Not Supported 00:24:06.624 Abort Command Limit: 4 00:24:06.624 Async Event Request Limit: 4 00:24:06.624 Number of Firmware Slots: N/A 00:24:06.624 Firmware Slot 1 Read-Only: N/A 00:24:06.624 Firmware Activation Without Reset: N/A 00:24:06.624 Multiple Update Detection Support: N/A 00:24:06.624 Firmware Update Granularity: No Information Provided 00:24:06.624 Per-Namespace SMART Log: No 00:24:06.624 Asymmetric Namespace Access Log Page: Not Supported 00:24:06.624 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:06.624 Command Effects Log Page: Supported 00:24:06.624 Get Log Page Extended Data: Supported 00:24:06.624 Telemetry Log Pages: Not Supported 00:24:06.624 Persistent Event Log Pages: Not Supported 00:24:06.624 Supported Log Pages Log Page: May Support 00:24:06.624 Commands
Supported & Effects Log Page: Not Supported 00:24:06.624 Feature Identifiers & Effects Log Page: May Support 00:24:06.624 NVMe-MI Commands & Effects Log Page: May Support 00:24:06.624 Data Area 4 for Telemetry Log: Not Supported 00:24:06.624 Error Log Page Entries Supported: 128 00:24:06.624 Keep Alive: Supported 00:24:06.624 Keep Alive Granularity: 10000 ms 00:24:06.624 00:24:06.624 NVM Command Set Attributes 00:24:06.624 ========================== 00:24:06.624 Submission Queue Entry Size 00:24:06.624 Max: 64 00:24:06.624 Min: 64 00:24:06.624 Completion Queue Entry Size 00:24:06.624 Max: 16 00:24:06.624 Min: 16 00:24:06.624 Number of Namespaces: 32 00:24:06.624 Compare Command: Supported 00:24:06.624 Write Uncorrectable Command: Not Supported 00:24:06.624 Dataset Management Command: Supported 00:24:06.624 Write Zeroes Command: Supported 00:24:06.624 Set Features Save Field: Not Supported 00:24:06.624 Reservations: Supported 00:24:06.624 Timestamp: Not Supported 00:24:06.624 Copy: Supported 00:24:06.624 Volatile Write Cache: Present 00:24:06.624 Atomic Write Unit (Normal): 1 00:24:06.624 Atomic Write Unit (PFail): 1 00:24:06.624 Atomic Compare & Write Unit: 1 00:24:06.624 Fused Compare & Write: Supported 00:24:06.624 Scatter-Gather List 00:24:06.624 SGL Command Set: Supported 00:24:06.624 SGL Keyed: Supported 00:24:06.624 SGL Bit Bucket Descriptor: Not Supported 00:24:06.624 SGL Metadata Pointer: Not Supported 00:24:06.624 Oversized SGL: Not Supported 00:24:06.624 SGL Metadata Address: Not Supported 00:24:06.624 SGL Offset: Supported 00:24:06.624 Transport SGL Data Block: Not Supported 00:24:06.624 Replay Protected Memory Block: Not Supported 00:24:06.624 00:24:06.624 Firmware Slot Information 00:24:06.624 ========================= 00:24:06.624 Active slot: 1 00:24:06.624 Slot 1 Firmware Revision: 24.05 00:24:06.624 00:24:06.624 00:24:06.624 Commands Supported and Effects 00:24:06.624 ============================== 00:24:06.624 Admin Commands 00:24:06.624 -------------- 00:24:06.624 Get Log Page (02h): Supported 00:24:06.624 Identify (06h): Supported 00:24:06.624 Abort (08h): Supported 00:24:06.624 Set Features (09h): Supported 00:24:06.624 Get Features (0Ah): Supported 00:24:06.624 Asynchronous Event Request (0Ch): Supported 00:24:06.624 Keep Alive (18h): Supported 00:24:06.624 I/O Commands 00:24:06.624 ------------ 00:24:06.624 Flush (00h): Supported LBA-Change 00:24:06.624 Write (01h): Supported LBA-Change 00:24:06.624 Read (02h): Supported 00:24:06.624 Compare (05h): Supported 00:24:06.624 Write Zeroes (08h): Supported LBA-Change 00:24:06.624 Dataset Management (09h): Supported LBA-Change 00:24:06.624 Copy (19h): Supported LBA-Change 00:24:06.624 Unknown (79h): Supported LBA-Change 00:24:06.624 Unknown (7Ah): Supported 00:24:06.624 00:24:06.624 Error Log 00:24:06.624 ========= 00:24:06.624 00:24:06.624 Arbitration 00:24:06.624 =========== 00:24:06.624 Arbitration Burst: 1 00:24:06.624 00:24:06.624 Power Management 00:24:06.624 ================ 00:24:06.624 Number of Power States: 1 00:24:06.624 Current Power State: Power State #0 00:24:06.624 Power State #0: 00:24:06.624 Max Power: 0.00 W 00:24:06.624 Non-Operational State: Operational 00:24:06.624 Entry Latency: Not Reported 00:24:06.624 Exit Latency: Not Reported 00:24:06.624 Relative Read Throughput: 0 00:24:06.624 Relative Read Latency: 0 00:24:06.624 Relative Write Throughput: 0 00:24:06.624 Relative Write Latency: 0 00:24:06.624 Idle Power: Not Reported 00:24:06.624 Active Power: Not Reported 00:24:06.624 Non-Operational
Permissive Mode: Not Supported 00:24:06.624 00:24:06.624 Health Information 00:24:06.624 ================== 00:24:06.624 Critical Warnings: 00:24:06.624 Available Spare Space: OK 00:24:06.624 Temperature: OK 00:24:06.624 Device Reliability: OK 00:24:06.624 Read Only: No 00:24:06.624 Volatile Memory Backup: OK 00:24:06.624 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:06.624 Temperature Threshold: [2024-05-15 08:35:53.474473] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.624 [2024-05-15 08:35:53.474477] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1490c30) 00:24:06.624 [2024-05-15 08:35:53.474483] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.625 [2024-05-15 08:35:53.474496] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f9320, cid 7, qid 0 00:24:06.625 [2024-05-15 08:35:53.474657] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.625 [2024-05-15 08:35:53.474663] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.625 [2024-05-15 08:35:53.474666] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.474669] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f9320) on tqpair=0x1490c30 00:24:06.625 [2024-05-15 08:35:53.474695] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:06.625 [2024-05-15 08:35:53.474706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.625 [2024-05-15 08:35:53.474711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.625 [2024-05-15 08:35:53.474717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.625 [2024-05-15 08:35:53.474722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.625 [2024-05-15 08:35:53.474728] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.474732] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.474735] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.625 [2024-05-15 08:35:53.474741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.625 [2024-05-15 08:35:53.474752] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.625 [2024-05-15 08:35:53.474818] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.625 [2024-05-15 08:35:53.474824] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.625 [2024-05-15 08:35:53.474827] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.474830] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.625 [2024-05-15 08:35:53.474838] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.474841] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.474844] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.625 [2024-05-15 08:35:53.474850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.625 [2024-05-15 08:35:53.474862] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.625 [2024-05-15 08:35:53.474939] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.625 [2024-05-15 08:35:53.474944] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.625 [2024-05-15 08:35:53.474947] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.474950] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.625 [2024-05-15 08:35:53.474955] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:06.625 [2024-05-15 08:35:53.474958] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:06.625 [2024-05-15 08:35:53.474966] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.474970] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.474973] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.625 [2024-05-15 08:35:53.474978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.625 [2024-05-15 08:35:53.474987] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.625 [2024-05-15 08:35:53.475057] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.625 [2024-05-15 08:35:53.475063] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.625 [2024-05-15 08:35:53.475066] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475069] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.625 [2024-05-15 08:35:53.475077] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475081] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475084] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.625 [2024-05-15 08:35:53.475090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.625 [2024-05-15 08:35:53.475098] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.625 [2024-05-15 08:35:53.475173] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.625 [2024-05-15 08:35:53.475179] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.625 [2024-05-15 08:35:53.475182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475185] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.625 [2024-05-15 08:35:53.475194] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475200] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.625 [2024-05-15 08:35:53.475206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.625 [2024-05-15 08:35:53.475216] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.625 [2024-05-15 08:35:53.475280] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.625 [2024-05-15 08:35:53.475286] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.625 [2024-05-15 08:35:53.475289] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475294] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.625 [2024-05-15 08:35:53.475302] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475306] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475309] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.625 [2024-05-15 08:35:53.475315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.625 [2024-05-15 08:35:53.475324] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.625 [2024-05-15 08:35:53.475390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.625 [2024-05-15 08:35:53.475396] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.625 [2024-05-15 08:35:53.475399] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475402] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.625 [2024-05-15 08:35:53.475411] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475414] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475417] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.625 [2024-05-15 08:35:53.475423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.625 [2024-05-15 08:35:53.475432] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.625 [2024-05-15 08:35:53.475502] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.625 [2024-05-15 08:35:53.475507] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.625 [2024-05-15 08:35:53.475510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475513] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.625 [2024-05-15 08:35:53.475522] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475526] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.625 [2024-05-15 
08:35:53.475529] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.625 [2024-05-15 08:35:53.475534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.625 [2024-05-15 08:35:53.475543] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.625 [2024-05-15 08:35:53.475612] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.625 [2024-05-15 08:35:53.475618] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.625 [2024-05-15 08:35:53.475621] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475624] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.625 [2024-05-15 08:35:53.475632] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475636] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.625 [2024-05-15 08:35:53.475639] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.625 [2024-05-15 08:35:53.475645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.475653] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.475722] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.475727] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.475730] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.475734] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.475744] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.475747] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.475750] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.475756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.475765] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.475829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.475834] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.475837] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.475841] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.475849] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.475853] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.475856] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.475861] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.475870] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.475935] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.475941] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.475944] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.475947] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.475956] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.475959] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.475962] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.475968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.475977] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.476038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.476043] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.476046] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476049] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.476058] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476061] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476064] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.476070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.476079] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.476146] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.476151] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.476154] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476158] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.476171] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476174] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476178] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.476183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.476192] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.476257] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.476262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.476265] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476268] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.476277] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476280] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476283] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.476289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.476298] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.476367] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.476372] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.476375] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476379] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.476387] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476390] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476394] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.476399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.476408] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.476478] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.476483] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.476486] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476490] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.476498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476502] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476505] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.476510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.476519] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.476586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:24:06.626 [2024-05-15 08:35:53.476591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.476594] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476597] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.476606] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476611] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476614] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.476620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.476629] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.476696] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.476701] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.476704] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476707] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.476716] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476719] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476722] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.476728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.476737] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.476799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.476804] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.476807] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476810] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.476819] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476823] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476826] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.476831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.476840] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.476900] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.626 [2024-05-15 08:35:53.476906] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.626 [2024-05-15 08:35:53.476909] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476912] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.626 [2024-05-15 08:35:53.476921] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476924] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.626 [2024-05-15 08:35:53.476927] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.626 [2024-05-15 08:35:53.476933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.626 [2024-05-15 08:35:53.476942] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.626 [2024-05-15 08:35:53.477011] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.477017] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.477019] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477023] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.477031] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477035] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477040] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.477045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.477054] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.477122] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.477127] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.477130] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477133] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.477142] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477145] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477148] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.477154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.477163] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.477233] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.477238] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.477241] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477244] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on 
tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.477254] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477258] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477261] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.477266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.477276] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.477341] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.477347] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.477349] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477352] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.477361] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477364] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477367] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.477373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.477382] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.477451] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.477457] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.477460] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477463] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.477471] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477475] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477478] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.477485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.477494] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.477559] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.477565] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.477568] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477571] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.477580] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477583] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477586] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.477592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.477601] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.477665] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.477671] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.477674] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477677] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.477685] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477688] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477691] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.477697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.477706] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.477765] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.477770] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.477773] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477776] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.477785] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477789] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477792] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.477797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.477806] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.477875] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.477881] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.477884] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477887] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.477895] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477899] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477902] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 
00:24:06.627 [2024-05-15 08:35:53.477909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.477918] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.477985] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.477991] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.477994] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.477997] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.478005] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.478009] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.478012] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.478018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.478027] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.478093] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.478099] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.478102] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.478105] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.478113] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.478117] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.478120] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.478126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 [2024-05-15 08:35:53.478135] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.482172] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.482179] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.482182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.482185] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.627 [2024-05-15 08:35:53.482195] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.482199] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:06.627 [2024-05-15 08:35:53.482202] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1490c30) 00:24:06.627 [2024-05-15 08:35:53.482208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.627 
[2024-05-15 08:35:53.482219] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14f8da0, cid 3, qid 0 00:24:06.627 [2024-05-15 08:35:53.482361] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.627 [2024-05-15 08:35:53.482367] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.627 [2024-05-15 08:35:53.482369] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.628 [2024-05-15 08:35:53.482373] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14f8da0) on tqpair=0x1490c30 00:24:06.628 [2024-05-15 08:35:53.482380] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:24:06.628 0 Kelvin (-273 Celsius) 00:24:06.628 Available Spare: 0% 00:24:06.628 Available Spare Threshold: 0% 00:24:06.628 Life Percentage Used: 0% 00:24:06.628 Data Units Read: 0 00:24:06.628 Data Units Written: 0 00:24:06.628 Host Read Commands: 0 00:24:06.628 Host Write Commands: 0 00:24:06.628 Controller Busy Time: 0 minutes 00:24:06.628 Power Cycles: 0 00:24:06.628 Power On Hours: 0 hours 00:24:06.628 Unsafe Shutdowns: 0 00:24:06.628 Unrecoverable Media Errors: 0 00:24:06.628 Lifetime Error Log Entries: 0 00:24:06.628 Warning Temperature Time: 0 minutes 00:24:06.628 Critical Temperature Time: 0 minutes 00:24:06.628 00:24:06.628 Number of Queues 00:24:06.628 ================ 00:24:06.628 Number of I/O Submission Queues: 127 00:24:06.628 Number of I/O Completion Queues: 127 00:24:06.628 00:24:06.628 Active Namespaces 00:24:06.628 ================= 00:24:06.628 Namespace ID:1 00:24:06.628 Error Recovery Timeout: Unlimited 00:24:06.628 Command Set Identifier: NVM (00h) 00:24:06.628 Deallocate: Supported 00:24:06.628 Deallocated/Unwritten Error: Not Supported 00:24:06.628 Deallocated Read Value: Unknown 00:24:06.628 Deallocate in Write Zeroes: Not Supported 00:24:06.628 Deallocated Guard Field: 0xFFFF 00:24:06.628 Flush: Supported 00:24:06.628 Reservation: Supported 00:24:06.628 Namespace Sharing Capabilities: Multiple Controllers 00:24:06.628 Size (in LBAs): 131072 (0GiB) 00:24:06.628 Capacity (in LBAs): 131072 (0GiB) 00:24:06.628 Utilization (in LBAs): 131072 (0GiB) 00:24:06.628 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:06.628 EUI64: ABCDEF0123456789 00:24:06.628 UUID: abee4f73-04b3-4305-9721-bf971d1345ad 00:24:06.628 Thin Provisioning: Not Supported 00:24:06.628 Per-NS Atomic Units: Yes 00:24:06.628 Atomic Boundary Size (Normal): 0 00:24:06.628 Atomic Boundary Size (PFail): 0 00:24:06.628 Atomic Boundary Offset: 0 00:24:06.628 Maximum Single Source Range Length: 65535 00:24:06.628 Maximum Copy Length: 65535 00:24:06.628 Maximum Source Range Count: 1 00:24:06.628 NGUID/EUI64 Never Reused: No 00:24:06.628 Namespace Write Protected: No 00:24:06.628 Number of LBA Formats: 1 00:24:06.628 Current LBA Format: LBA Format #00 00:24:06.628 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:06.628 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # 
trap - SIGINT SIGTERM EXIT 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.628 rmmod nvme_tcp 00:24:06.628 rmmod nvme_fabrics 00:24:06.628 rmmod nvme_keyring 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 373958 ']' 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 373958 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 373958 ']' 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 373958 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 373958 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 373958' 00:24:06.628 killing process with pid 373958 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 373958 00:24:06.628 [2024-05-15 08:35:53.606871] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:06.628 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 373958 00:24:06.886 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:06.886 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:06.886 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:06.886 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.886 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:06.886 08:35:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.886 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.886 08:35:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.424 08:35:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:09.424 00:24:09.424 real 0m8.567s 00:24:09.424 user 0m7.082s 00:24:09.424 sys 0m3.915s 00:24:09.424 08:35:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:24:09.424 08:35:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.424 ************************************ 00:24:09.424 END TEST nvmf_identify 00:24:09.424 ************************************ 00:24:09.424 08:35:55 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:09.424 08:35:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:09.424 08:35:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:09.424 08:35:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:09.424 ************************************ 00:24:09.424 START TEST nvmf_perf 00:24:09.424 ************************************ 00:24:09.424 08:35:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:09.424 * Looking for test storage... 00:24:09.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:09.424 08:35:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.696 08:36:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:14.696 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:14.696 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:14.696 Found net devices under 0000:86:00.0: cvl_0_0 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.696 08:36:01 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:14.696 Found net devices under 0000:86:00.1: cvl_0_1 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:14.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:14.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:14.696 00:24:14.696 --- 10.0.0.2 ping statistics --- 00:24:14.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.696 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:24:14.696 00:24:14.696 --- 10.0.0.1 ping statistics --- 00:24:14.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.696 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:14.696 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=377813 00:24:14.697 08:36:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 377813 00:24:14.697 08:36:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 377813 ']' 00:24:14.697 08:36:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.697 08:36:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:14.697 08:36:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.697 08:36:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:14.697 08:36:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.697 [2024-05-15 08:36:01.312987] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:24:14.697 [2024-05-15 08:36:01.313027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.697 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.697 [2024-05-15 08:36:01.370002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.697 [2024-05-15 08:36:01.455422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.697 [2024-05-15 08:36:01.455456] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.697 [2024-05-15 08:36:01.455463] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.697 [2024-05-15 08:36:01.455469] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.697 [2024-05-15 08:36:01.455474] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.697 [2024-05-15 08:36:01.455515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.697 [2024-05-15 08:36:01.455611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.697 [2024-05-15 08:36:01.455694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.697 [2024-05-15 08:36:01.455695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.264 08:36:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:15.264 08:36:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:24:15.264 08:36:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:15.264 08:36:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:15.264 08:36:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:15.264 08:36:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.264 08:36:02 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:15.265 08:36:02 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:18.555 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:18.555 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:18.555 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:24:18.555 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:18.814 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:18.814 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:24:18.814 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:18.814 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:18.814 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:18.814 [2024-05-15 08:36:05.732928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
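For readers following the trace: the target-side bring-up that perf.sh drives here (transport created in the call above, subsystem, namespaces, and listener in the calls that follow) reduces to a short rpc.py sequence. A minimal sketch, assuming a running nvmf_tgt and using this job's rpc.py path; the $rpc shorthand is illustrative, every command below appears verbatim in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Create the TCP transport, then a subsystem (-a = allow any host,
    # -s = serial number) with a malloc-backed and an NVMe-backed namespace,
    # and expose it on the target-namespace address 10.0.0.2:4420.
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_malloc_create 64 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up, the spdk_nvme_perf invocations in the log connect to trtype:tcp traddr:10.0.0.2 trsvcid:4420 and exercise both namespaces.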
00:24:18.814 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.074 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:19.074 08:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:19.333 08:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:19.333 08:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:19.333 08:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.592 [2024-05-15 08:36:06.493033] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:19.592 [2024-05-15 08:36:06.493294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.592 08:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:19.852 08:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:24:19.852 08:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:19.852 08:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:19.852 08:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:21.230 Initializing NVMe Controllers 00:24:21.230 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:24:21.230 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:24:21.230 Initialization complete. Launching workers. 00:24:21.230 ======================================================== 00:24:21.230 Latency(us) 00:24:21.230 Device Information : IOPS MiB/s Average min max 00:24:21.230 PCIE (0000:5e:00.0) NSID 1 from core 0: 97814.20 382.09 326.67 36.51 6241.34 00:24:21.230 ======================================================== 00:24:21.230 Total : 97814.20 382.09 326.67 36.51 6241.34 00:24:21.230 00:24:21.230 08:36:07 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:21.230 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.168 Initializing NVMe Controllers 00:24:22.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:22.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:22.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:22.168 Initialization complete. Launching workers. 
00:24:22.168 ======================================================== 00:24:22.168 Latency(us) 00:24:22.168 Device Information : IOPS MiB/s Average min max 00:24:22.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 88.77 0.35 11443.64 107.54 45585.62 00:24:22.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.83 0.26 15081.87 4994.66 47898.05 00:24:22.168 ======================================================== 00:24:22.168 Total : 155.60 0.61 13006.21 107.54 47898.05 00:24:22.168 00:24:22.168 08:36:09 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:22.168 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.548 Initializing NVMe Controllers 00:24:23.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:23.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:23.548 Initialization complete. Launching workers. 00:24:23.548 ======================================================== 00:24:23.548 Latency(us) 00:24:23.548 Device Information : IOPS MiB/s Average min max 00:24:23.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10882.15 42.51 2941.00 420.99 8981.34 00:24:23.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3770.71 14.73 8508.60 7136.81 16124.12 00:24:23.548 ======================================================== 00:24:23.548 Total : 14652.86 57.24 4373.74 420.99 16124.12 00:24:23.548 00:24:23.548 08:36:10 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:23.548 08:36:10 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:23.548 08:36:10 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.548 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.085 Initializing NVMe Controllers 00:24:26.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.085 Controller IO queue size 128, less than required. 00:24:26.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.085 Controller IO queue size 128, less than required. 00:24:26.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:26.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:26.085 Initialization complete. Launching workers. 
00:24:26.085 ======================================================== 00:24:26.085 Latency(us) 00:24:26.085 Device Information : IOPS MiB/s Average min max 00:24:26.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1964.26 491.07 66080.79 43020.39 109608.26 00:24:26.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.93 150.98 222029.70 77718.38 333092.82 00:24:26.085 ======================================================== 00:24:26.085 Total : 2568.19 642.05 102753.22 43020.39 333092.82 00:24:26.085 00:24:26.085 08:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:26.085 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.652 No valid NVMe controllers or AIO or URING devices found 00:24:26.652 Initializing NVMe Controllers 00:24:26.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.652 Controller IO queue size 128, less than required. 00:24:26.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.652 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:26.652 Controller IO queue size 128, less than required. 00:24:26.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.652 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:26.652 WARNING: Some requested NVMe devices were skipped 00:24:26.652 08:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:26.652 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.190 Initializing NVMe Controllers 00:24:29.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.190 Controller IO queue size 128, less than required. 00:24:29.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:29.190 Controller IO queue size 128, less than required. 00:24:29.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:29.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:29.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:29.190 Initialization complete. Launching workers. 
00:24:29.190 00:24:29.190 ==================== 00:24:29.190 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:29.190 TCP transport: 00:24:29.190 polls: 13255 00:24:29.190 idle_polls: 9027 00:24:29.190 sock_completions: 4228 00:24:29.190 nvme_completions: 6977 00:24:29.190 submitted_requests: 10432 00:24:29.190 queued_requests: 1 00:24:29.190 00:24:29.190 ==================== 00:24:29.190 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:29.190 TCP transport: 00:24:29.190 polls: 19283 00:24:29.190 idle_polls: 14145 00:24:29.190 sock_completions: 5138 00:24:29.190 nvme_completions: 7225 00:24:29.190 submitted_requests: 10940 00:24:29.190 queued_requests: 1 00:24:29.190 ======================================================== 00:24:29.190 Latency(us) 00:24:29.190 Device Information : IOPS MiB/s Average min max 00:24:29.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1740.45 435.11 74208.61 45082.14 122922.18 00:24:29.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1802.33 450.58 72482.76 31853.05 112459.63 00:24:29.190 ======================================================== 00:24:29.190 Total : 3542.78 885.70 73330.61 31853.05 122922.18 00:24:29.190 00:24:29.190 08:36:15 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:29.190 08:36:15 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.190 rmmod nvme_tcp 00:24:29.190 rmmod nvme_fabrics 00:24:29.190 rmmod nvme_keyring 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.190 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:29.191 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:29.191 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 377813 ']' 00:24:29.191 08:36:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 377813 00:24:29.191 08:36:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 377813 ']' 00:24:29.191 08:36:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 377813 00:24:29.191 08:36:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:24:29.191 08:36:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.191 08:36:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 377813 00:24:29.450 08:36:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:29.450 08:36:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:29.450 08:36:16 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 377813' 00:24:29.450 killing process with pid 377813 00:24:29.450 08:36:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 377813 00:24:29.450 [2024-05-15 08:36:16.223216] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:29.450 08:36:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 377813 00:24:30.830 08:36:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:30.830 08:36:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:30.830 08:36:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:30.830 08:36:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.830 08:36:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:30.830 08:36:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.830 08:36:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.830 08:36:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.369 08:36:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:33.369 00:24:33.369 real 0m23.826s 00:24:33.369 user 1m4.728s 00:24:33.369 sys 0m7.274s 00:24:33.369 08:36:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:33.369 08:36:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:33.369 ************************************ 00:24:33.369 END TEST nvmf_perf 00:24:33.369 ************************************ 00:24:33.369 08:36:19 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:33.369 08:36:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:33.369 08:36:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:33.369 08:36:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:33.369 ************************************ 00:24:33.369 START TEST nvmf_fio_host 00:24:33.369 ************************************ 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:33.369 * Looking for test storage... 
00:24:33.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.369 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.370 08:36:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.370 08:36:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:33.370 08:36:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:33.370 08:36:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:33.370 08:36:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:38.646 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:38.646 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:38.646 Found net devices under 0000:86:00.0: cvl_0_0 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:38.646 Found net devices under 0000:86:00.1: cvl_0_1 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:38.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:24:38.646 00:24:38.646 --- 10.0.0.2 ping statistics --- 00:24:38.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.646 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:24:38.646 00:24:38.646 --- 10.0.0.1 ping statistics --- 00:24:38.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.646 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=384342 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 384342 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 384342 ']' 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:38.646 08:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.647 08:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:38.647 08:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.647 [2024-05-15 08:36:25.376947] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:24:38.647 [2024-05-15 08:36:25.376989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.647 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.647 [2024-05-15 08:36:25.432770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.647 [2024-05-15 08:36:25.513796] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:38.647 [2024-05-15 08:36:25.513830] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.647 [2024-05-15 08:36:25.513837] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.647 [2024-05-15 08:36:25.513843] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.647 [2024-05-15 08:36:25.513848] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.647 [2024-05-15 08:36:25.513883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.647 [2024-05-15 08:36:25.513980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.647 [2024-05-15 08:36:25.514042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.647 [2024-05-15 08:36:25.514043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.213 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:39.213 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:24:39.213 08:36:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:39.213 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.213 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.213 [2024-05-15 08:36:26.207983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.213 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.213 08:36:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:39.213 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.213 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.472 Malloc1 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:24:39.472 [2024-05-15 08:36:26.291733] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:39.472 [2024-05-15 08:36:26.291956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:39.472 
08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:39.472 08:36:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:39.731 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:39.731 fio-3.35 00:24:39.731 Starting 1 thread 00:24:39.731 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.266 [2024-05-15 08:36:28.881581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2b0 is same with the state(5) to be set 00:24:42.266 [2024-05-15 08:36:28.881633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2b0 is same with the state(5) to be set 00:24:42.266 [2024-05-15 08:36:28.881641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2b0 is same with the state(5) to be set 00:24:42.266 [2024-05-15 08:36:28.881647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2b0 is same with the state(5) to be set 00:24:42.266 [2024-05-15 08:36:28.881653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2b0 is same with the state(5) to be set 00:24:42.266 00:24:42.266 test: (groupid=0, jobs=1): err= 0: pid=384712: Wed May 15 08:36:28 2024 00:24:42.266 read: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(90.6MiB/2006msec) 00:24:42.266 slat (nsec): min=1600, max=239081, avg=1743.55, stdev=2239.12 00:24:42.266 clat (usec): min=3171, max=10500, avg=6102.09, stdev=468.85 00:24:42.266 lat (usec): min=3200, max=10502, avg=6103.84, stdev=468.79 00:24:42.266 clat percentiles (usec): 00:24:42.266 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:24:42.266 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:24:42.266 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:24:42.266 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 8094], 99.95th=[ 9372], 00:24:42.266 | 99.99th=[10421] 00:24:42.266 bw ( KiB/s): min=45384, max=46952, per=100.00%, avg=46274.00, stdev=655.33, samples=4 00:24:42.266 iops : min=11346, max=11738, avg=11568.50, stdev=163.83, samples=4 00:24:42.266 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(90.0MiB/2006msec); 0 zone resets 00:24:42.266 slat (nsec): min=1651, max=223086, avg=1821.75, stdev=1648.89 00:24:42.266 clat (usec): min=2439, max=10470, avg=4939.54, stdev=404.60 00:24:42.266 lat (usec): min=2454, max=10472, avg=4941.37, stdev=404.61 00:24:42.266 clat percentiles (usec): 00:24:42.266 | 1.00th=[ 4015], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:24:42.266 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 00:24:42.266 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5538], 00:24:42.266 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 8094], 99.95th=[ 9372], 00:24:42.266 | 99.99th=[10421] 00:24:42.266 bw ( KiB/s): min=45648, max=46400, per=100.00%, avg=45964.00, stdev=370.74, samples=4 00:24:42.266 iops : min=11412, max=11600, avg=11491.00, stdev=92.69, samples=4 00:24:42.266 lat (msec) : 4=0.46%, 10=99.51%, 20=0.03% 00:24:42.266 cpu : usr=74.41%, sys=24.49%, ctx=95, majf=0, minf=4 
00:24:42.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:42.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:42.266 issued rwts: total=23199,23047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:42.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:42.266 00:24:42.266 Run status group 0 (all jobs): 00:24:42.266 READ: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=90.6MiB (95.0MB), run=2006-2006msec 00:24:42.266 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=90.0MiB (94.4MB), run=2006-2006msec 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:42.266 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:42.267 08:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # 
/usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:42.267 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:42.267 fio-3.35 00:24:42.267 Starting 1 thread 00:24:42.267 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.173 [2024-05-15 08:36:30.674552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e570 is same with the state(5) to be set 00:24:44.173 [2024-05-15 08:36:30.674594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e570 is same with the state(5) to be set 00:24:44.743 00:24:44.743 test: (groupid=0, jobs=1): err= 0: pid=385284: Wed May 15 08:36:31 2024 00:24:44.743 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(340MiB/2007msec) 00:24:44.743 slat (nsec): min=2569, max=82384, avg=2870.78, stdev=1128.16 00:24:44.743 clat (usec): min=1537, max=12677, avg=6774.30, stdev=1514.88 00:24:44.743 lat (usec): min=1540, max=12680, avg=6777.17, stdev=1514.94 00:24:44.743 clat percentiles (usec): 00:24:44.743 | 1.00th=[ 3589], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5473], 00:24:44.743 | 30.00th=[ 5932], 40.00th=[ 6325], 50.00th=[ 6718], 60.00th=[ 7177], 00:24:44.743 | 70.00th=[ 7504], 80.00th=[ 7832], 90.00th=[ 8717], 95.00th=[ 9503], 00:24:44.743 | 99.00th=[10683], 99.50th=[10945], 99.90th=[12125], 99.95th=[12256], 00:24:44.743 | 99.99th=[12649] 00:24:44.743 bw ( KiB/s): min=82240, max=93184, per=50.47%, avg=87632.00, stdev=6083.79, samples=4 00:24:44.743 iops : min= 5140, max= 5824, avg=5477.00, stdev=380.24, samples=4 00:24:44.743 write: IOPS=6405, BW=100MiB/s (105MB/s)(180MiB/1794msec); 0 zone resets 00:24:44.743 slat (usec): min=29, max=251, avg=31.95, stdev= 4.62 00:24:44.743 clat (usec): min=4893, max=14813, avg=8774.47, stdev=1449.33 00:24:44.743 lat (usec): min=4925, max=14844, avg=8806.42, stdev=1449.60 00:24:44.743 clat percentiles (usec): 00:24:44.743 | 1.00th=[ 5932], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 7504], 00:24:44.743 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:24:44.743 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11338], 00:24:44.743 | 99.00th=[12911], 99.50th=[13435], 99.90th=[14353], 99.95th=[14615], 00:24:44.743 | 99.99th=[14746] 00:24:44.743 bw ( KiB/s): min=85600, max=97280, per=88.97%, avg=91192.00, stdev=5891.85, samples=4 00:24:44.743 iops : min= 5350, max= 6080, avg=5699.50, stdev=368.24, samples=4 00:24:44.743 lat (msec) : 2=0.05%, 4=1.65%, 10=90.19%, 20=8.12% 00:24:44.743 cpu : usr=87.54%, sys=11.71%, ctx=37, majf=0, minf=1 00:24:44.743 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:44.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:44.743 issued rwts: total=21781,11492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.743 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:44.743 00:24:44.743 Run status group 0 (all jobs): 00:24:44.743 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=340MiB (357MB), run=2007-2007msec 00:24:44.743 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=180MiB (188MB), run=1794-1794msec 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.743 
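Both fio runs above (example_config.fio, then mock_sgl_config.fio) go through fio_plugin, which LD_PRELOADs SPDK's fio ioengine and passes the NVMe/TCP connection parameters in --filename instead of a device path; the ldd | grep | awk probes around each run only check the plugin for a libasan/libclang_rt.asan dependency so a matching sanitizer runtime could be prepended to LD_PRELOAD, and both probes come back empty here. A minimal sketch of the same invocation; the job-file body is an illustrative assumption, since the log records only the job files' paths:

```bash
# Hypothetical minimal job file -- the SPDK plugin is selected with ioengine=spdk.
cat > tcp_job.fio <<'EOF'
[global]
ioengine=spdk
thread=1
rw=randrw
bs=4096
iodepth=128
time_based=1
runtime=5

[test]
# filename is supplied on the command line below
EOF

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# The "filename" is a space-separated transport ID rather than a block device,
# so it must be quoted to stay a single argument, as in the trace above.
LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio tcp_job.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096
```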
08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:44.743 rmmod nvme_tcp 00:24:44.743 rmmod nvme_fabrics 00:24:44.743 rmmod nvme_keyring 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 384342 ']' 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 384342 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 384342 ']' 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 384342 00:24:44.743 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:24:45.003 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:45.003 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 384342 00:24:45.003 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:45.003 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:45.003 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 384342' 00:24:45.003 killing process with pid 384342 00:24:45.003 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 384342 00:24:45.003 [2024-05-15 08:36:31.808926] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:45.003 08:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 384342 00:24:45.262 08:36:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.262 08:36:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.262 08:36:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:45.262 08:36:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.262 08:36:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:45.262 08:36:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.262 08:36:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.262 08:36:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.170 08:36:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.170 00:24:47.170 real 0m14.217s 00:24:47.170 user 0m41.866s 00:24:47.170 sys 0m5.789s 00:24:47.170 08:36:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:47.170 08:36:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.170 ************************************ 00:24:47.170 END TEST nvmf_fio_host 00:24:47.170 ************************************ 00:24:47.170 08:36:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:47.170 08:36:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:47.170 08:36:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:47.170 08:36:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.170 ************************************ 00:24:47.170 START TEST nvmf_failover 00:24:47.170 ************************************ 00:24:47.170 08:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:47.430 * Looking for test storage... 00:24:47.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.430 08:36:34 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated four more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=[the same heavily repeated value, rotated to start at /opt/go/1.21.1/bin; duplicate collapsed]
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=[the same heavily repeated value, rotated to start at /opt/protoc/21.7/bin; duplicate collapsed]
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo [the exported PATH value, as above; duplicate collapsed]
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:47.430 08:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:52.707 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:52.707 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:52.707 Found net devices under 0000:86:00.0: cvl_0_0 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:52.707 Found net devices under 0000:86:00.1: cvl_0_1 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.707 08:36:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:24:52.707 00:24:52.707 --- 10.0.0.2 ping statistics --- 00:24:52.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.707 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:24:52.707 00:24:52.707 --- 10.0.0.1 ping statistics --- 00:24:52.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.707 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=389021 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 389021 00:24:52.707 08:36:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 389021 ']' 00:24:52.708 08:36:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.708 08:36:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:52.708 08:36:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:52.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.708 08:36:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:52.708 08:36:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:52.708 08:36:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.708 [2024-05-15 08:36:39.277786] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:24:52.708 [2024-05-15 08:36:39.277829] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.708 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.708 [2024-05-15 08:36:39.333613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:52.708 [2024-05-15 08:36:39.413020] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.708 [2024-05-15 08:36:39.413053] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.708 [2024-05-15 08:36:39.413060] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.708 [2024-05-15 08:36:39.413066] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.708 [2024-05-15 08:36:39.413071] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.708 [2024-05-15 08:36:39.413192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.708 [2024-05-15 08:36:39.413209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:52.708 [2024-05-15 08:36:39.413211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.277 08:36:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:53.277 08:36:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:53.277 08:36:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.277 08:36:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:53.277 08:36:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:53.277 08:36:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.277 08:36:40 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:53.277 [2024-05-15 08:36:40.286668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.537 08:36:40 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:53.537 Malloc0 00:24:53.537 08:36:40 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:53.797 08:36:40 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:54.056 08:36:40 
nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.056 [2024-05-15 08:36:41.054793] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:54.056 [2024-05-15 08:36:41.055018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.315 08:36:41 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:54.315 [2024-05-15 08:36:41.231493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:54.315 08:36:41 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:54.574 [2024-05-15 08:36:41.412076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:54.574 08:36:41 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:54.574 08:36:41 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=389289 00:24:54.574 08:36:41 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:54.574 08:36:41 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 389289 /var/tmp/bdevperf.sock 00:24:54.574 08:36:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 389289 ']' 00:24:54.574 08:36:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.575 08:36:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:54.575 08:36:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
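At this point the target side is fully assembled (TCP transport, Malloc0 namespace under nqn.2016-06.io.spdk:cnode1, listeners on 4420/4421/4422) and bdevperf sits waiting on its own RPC socket. The trace that follows is the core of the failover exercise: the same subsystem is attached twice under one controller name so the host has a second path, I/O is started, and listeners are then removed and re-added so I/O is forced from portal to portal; each removal is what triggers the bursts of tcp.c:1598 recv-state errors below as qpairs are torn down. A condensed sketch of that RPC sequence, assembled from the commands in this run (sleeps and pid bookkeeping omitted; the backgrounding of perform_tests is inferred from the run_test_pid assignment):

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
BP_SOCK=/var/tmp/bdevperf.sock

# Attach the subsystem twice under the same -b name: the second call adds
# port 4421 as an alternate path behind the same NVMe0n1 bdev (this is how
# failover.sh provisions its secondary portal) rather than a new bdev.
"$RPC" -s "$BP_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
       -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$RPC" -s "$BP_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
       -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Start the verify workload inside the already-running bdevperf process.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BP_SOCK" perform_tests &

# Tear down the active portal on the target side; I/O must fail over to 4421.
"$RPC" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420
```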
00:24:54.575 08:36:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:54.575 08:36:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:55.513 08:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:55.513 08:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:55.513 08:36:42 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.773 NVMe0n1 00:24:55.773 08:36:42 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.059 00:24:56.059 08:36:42 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:56.059 08:36:42 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=389530 00:24:56.059 08:36:42 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:57.095 08:36:43 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.095 [2024-05-15 08:36:44.077565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set 00:24:57.095 [2024-05-15 08:36:44.077682] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set
00:24:57.095 [2024-05-15 08:36:44.077687 .. 08:36:44.077732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882f00 is same with the state(5) to be set (identical message repeated over this interval; run condensed)
00:24:57.095 08:36:44 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:00.549 08:36:47 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:00.549 
00:25:00.549 08:36:47 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
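The two RPCs above are the core of the failover exercise: bdev_nvme_attach_controller gives the bdevperf-owned controller NVMe0 a second path to the subsystem on port 4422, and nvmf_subsystem_remove_listener then tears down the listener carrying the live I/O, forcing the host to fail over. As a stand-alone sketch ($RPC and $NQN are readability shorthands introduced here; the commands themselves are verbatim from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # failover.sh@47: add a second path (port 4422) to controller NVMe0,
    # via the RPC socket of the bdevperf process that owns it
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN

    # failover.sh@48: drop the listener the live I/O is using (port 4421),
    # so the host must fail over to the 4422 path
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421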
00:25:00.855 [2024-05-15 08:36:47.672981 .. 08:36:47.673399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1883d90 is same with the state(5) to be set (identical message repeated over this interval; run condensed)
00:25:00.856 08:36:47 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:04.267 08:36:50 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:04.267 [2024-05-15 08:36:50.855293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:04.267 08:36:50 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:05.204 08:36:51 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:05.204 [2024-05-15 08:36:52.053128 .. 08:36:52.053398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1884c60 is same with the state(5) to be set (identical message repeated over this interval; run condensed)
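The fail-back half mirrors the flip: the original port 4420 is restored, the host is given a moment to reconnect, and the temporary 4422 path is retired. Stand-alone, with the same $RPC/$NQN shorthands as in the previous sketch:

    # failover.sh@53: bring a listener back on the original port 4420
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420

    # failover.sh@55: give the host a moment to reconnect
    sleep 1

    # failover.sh@57: retire the temporary 4422 path, failing back to 4420
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422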
00:25:05.204 08:36:52 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 389530
00:25:11.791 0
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 389289
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 389289 ']'
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 389289
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 389289
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 389289'
00:25:11.791 killing process with pid 389289
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 389289
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 389289
00:25:11.791 08:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:11.791 [2024-05-15 08:36:41.483329] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:25:11.791 [2024-05-15 08:36:41.483380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389289 ]
00:25:11.791 EAL: No free 2048 kB hugepages reported on node 1
00:25:11.791 [2024-05-15 08:36:41.538235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:11.791 [2024-05-15 08:36:41.613945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:11.791 Running I/O for 15 seconds...
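The try.txt dump that starts above and continues below is the private log of the bdevperf process (pid 389289, the reactor_0 killed above) that drove the I/O while the listeners were being flipped. Its exact command line is not captured in this log; as an illustrative assumption, SPDK's nvmf host tests start it along these lines, parked on the same RPC socket the failover steps talk to:

    # Illustrative sketch, not read from this log: -z makes bdevperf wait
    # until it is configured over /var/tmp/bdevperf.sock, after which the
    # test attaches the first path and runs timed I/O (15 seconds here).
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 15 &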
00:25:11.791 [2024-05-15 08:36:44.078842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.078879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.078895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.078905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.078914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.078922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.078930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.078937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.078944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.078951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.078959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.078966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.078974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.078981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.078990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.078997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.079015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.079030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079038] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.079045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.079065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.079081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.079096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.079111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.791 [2024-05-15 08:36:44.079126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.791 [2024-05-15 08:36:44.079142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.791 [2024-05-15 08:36:44.079156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.791 [2024-05-15 08:36:44.079179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.791 [2024-05-15 08:36:44.079194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.791 [2024-05-15 08:36:44.079202] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.791 [2024-05-15 08:36:44.079209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96656 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 
[2024-05-15 08:36:44.079515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.792 [2024-05-15 08:36:44.079839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.792 [2024-05-15 08:36:44.079847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.079856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.079863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.079871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.079878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.079886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.079899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.079907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.079914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.079923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.079929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.079937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.079944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.079951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.079958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.079966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.079974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.079982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.079988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.079996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080149] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080300] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.793 [2024-05-15 08:36:44.080351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.793 [2024-05-15 08:36:44.080379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97176 len:8 PRP1 0x0 PRP2 0x0 00:25:11.793 [2024-05-15 08:36:44.080385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.793 [2024-05-15 08:36:44.080402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.793 [2024-05-15 08:36:44.080409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97184 len:8 PRP1 0x0 PRP2 0x0 00:25:11.793 [2024-05-15 08:36:44.080416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.793 [2024-05-15 08:36:44.080428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.793 [2024-05-15 08:36:44.080433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97192 len:8 PRP1 0x0 PRP2 0x0 00:25:11.793 [2024-05-15 08:36:44.080439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.793 [2024-05-15 08:36:44.080452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.793 [2024-05-15 08:36:44.080458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97200 len:8 PRP1 0x0 PRP2 0x0 00:25:11.793 [2024-05-15 08:36:44.080464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 
08:36:44.080471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.793 [2024-05-15 08:36:44.080477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.793 [2024-05-15 08:36:44.080482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97208 len:8 PRP1 0x0 PRP2 0x0 00:25:11.793 [2024-05-15 08:36:44.080489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.793 [2024-05-15 08:36:44.080495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.794 [2024-05-15 08:36:44.080500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.794 [2024-05-15 08:36:44.080506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0 00:25:11.794 [2024-05-15 08:36:44.080513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.794 [2024-05-15 08:36:44.080520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.794 [2024-05-15 08:36:44.080525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.794 [2024-05-15 08:36:44.080530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0 00:25:11.794 [2024-05-15 08:36:44.080537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.794 [2024-05-15 08:36:44.080543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.794 [2024-05-15 08:36:44.080548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.794 [2024-05-15 08:36:44.080554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0 00:25:11.794 [2024-05-15 08:36:44.080560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.794 [2024-05-15 08:36:44.080566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.794 [2024-05-15 08:36:44.080573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.794 [2024-05-15 08:36:44.080579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97240 len:8 PRP1 0x0 PRP2 0x0 00:25:11.794 [2024-05-15 08:36:44.080585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.794 [2024-05-15 08:36:44.080594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.794 [2024-05-15 08:36:44.080599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.794 [2024-05-15 08:36:44.080604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97248 len:8 PRP1 0x0 PRP2 0x0 00:25:11.794 [2024-05-15 08:36:44.080610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.794 [2024-05-15 08:36:44.080617] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:11.794 [2024-05-15 08:36:44.080622 - 08:36:44.092130] nvme_qpair.c: the same four-message sequence repeats for each queued WRITE from lba:97256 through lba:97432 (len:8, step 8):
    579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
    558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
    243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:<lba> len:8 PRP1 0x0 PRP2 0x0
    474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.794 [2024-05-15 08:36:44.092187] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe3a230 was disconnected and freed. reset controller.
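What the run above records is the host driver tearing down a queue pair: nvme_qpair_abort_queued_reqs() walks the queued requests and nvme_qpair_manual_complete_request() completes each one with ABORTED - SQ DELETION before bdev_nvme frees the disconnected qpair and schedules a controller reset. The target side that makes the subsequent failover possible needs the same subsystem reachable on two listeners; a minimal sketch, assuming only the addresses and NQN taken from the log (the Malloc0 backing bdev, serial number, and sizes are illustrative, not the autotest's actual configuration):

# Hypothetical minimal nvmf target with a primary and a failover listener.
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp
$rpc bdev_malloc_create 64 512 -b Malloc0        # illustrative backing namespace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Same address, two ports: 4420 is the primary path, 4421 the failover path.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421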
00:25:11.795 [2024-05-15 08:36:44.092206] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:11.795 [2024-05-15 08:36:44.092231 - 08:36:44.092307] nvme_qpair.c: the four queued ASYNC EVENT REQUEST admin commands (qid:0, cid:0-3, cdw10:00000000 cdw11:00000000) are each aborted with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.795 [2024-05-15 08:36:44.092317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:11.795 [2024-05-15 08:36:44.092354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1b400 (9): Bad file descriptor
00:25:11.795 [2024-05-15 08:36:44.096254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:11.795 [2024-05-15 08:36:44.165205] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
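The bdev_nvme_failover_trid notice means the bdev layer already knows an alternate transport ID for this controller: it fails the controller, disconnects, reconnects on 10.0.0.2:4421, and the reset completes roughly 70 ms later. A sketch of how a host can register that alternate path and force such a failover; this is hedged, since the flag set of bdev_nvme_attach_controller and the multipath/failover semantics vary between SPDK versions, and removing the target listener is only one way to kill the primary path:

# Hypothetical host-side sequence that produces a failover like the one logged.
rpc=scripts/rpc.py
# Attach the controller on the primary path, then register the alternate
# path under the same bdev/controller name so bdev_nvme can fail over to it.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# On the target, dropping the active listener severs the primary connection;
# the host then aborts its queued i/o (the SQ DELETION runs above) and
# reconnects on 4421, ending with "Resetting controller successful."
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420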
00:25:11.795 [2024-05-15 08:36:47.675230 - 08:36:47.676673] nvme_qpair.c: every in-flight command on the old queue pair is printed with its abort completion - WRITE lba:40128 through lba:40792 (len:8, step 8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:40000 through lba:40064 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each with its own cid:
    243:nvme_io_qpair_print_command: *NOTICE*: <READ|WRITE> sqid:1 cid:<cid> nsid:1 lba:<lba> len:8 ...
    474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.797 [2024-05-15 08:36:47.676692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:11.797 [2024-05-15 08:36:47.676699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40800 len:8 PRP1 0x0 PRP2 0x0
00:25:11.797 [2024-05-15 08:36:47.676706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.797 [2024-05-15 08:36:47.676732 - 08:36:47.676786] nvme_qpair.c: the four queued ASYNC EVENT REQUEST admin commands (qid:0, cid:3-0, cdw10:00000000 cdw11:00000000) are each aborted with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.797 [2024-05-15 08:36:47.676794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1b400 is same with the state(5) to be set
00:25:11.797 [2024-05-15 08:36:47.676963 - 08:36:47.687215] nvme_qpair.c: the queued-request abort sequence (579:nvme_qpair_abort_queued_reqs -> 558:nvme_qpair_manual_complete_request -> 243:nvme_io_qpair_print_command -> 474:spdk_nvme_print_completion, all ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) then repeats for queued WRITE lba:40808 through lba:41016 (step 8), READ lba:40072 through lba:40120 and lba:40000, and re-queued WRITE lba:40128, lba:40136, lba:40144 (all len:8, PRP1 0x0 PRP2 0x0) [2024-05-15 08:36:47.687215]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40152 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40160 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40168 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40176 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40184 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40192 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40200 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40208 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40216 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40224 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40232 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.799 [2024-05-15 08:36:47.687493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.799 [2024-05-15 08:36:47.687498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40240 len:8 PRP1 0x0 PRP2 0x0 00:25:11.799 [2024-05-15 08:36:47.687504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.799 [2024-05-15 08:36:47.687511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 
08:36:47.687516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40248 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40256 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40264 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40272 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40280 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40288 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687671] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40008 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40016 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40024 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40032 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40040 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40048 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:25:11.800 [2024-05-15 08:36:47.687827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40056 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40064 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40296 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40304 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40312 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.687950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40320 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 
08:36:47.687974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40328 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.687981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.687988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.800 [2024-05-15 08:36:47.687995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.800 [2024-05-15 08:36:47.688002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40336 len:8 PRP1 0x0 PRP2 0x0 00:25:11.800 [2024-05-15 08:36:47.688008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.800 [2024-05-15 08:36:47.688015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40344 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40352 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40360 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40368 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40376 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40384 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40392 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40400 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40408 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40416 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:40424 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40432 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40440 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40448 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40456 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.688379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.688386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.688392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.688398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40464 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.694921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.694935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.694943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.694951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40472 len:8 PRP1 0x0 PRP2 0x0 
00:25:11.801 [2024-05-15 08:36:47.694960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.694970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.694977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.694985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40480 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.694994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.695003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.695010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.695018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40488 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.695026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.695036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.695042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.695052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40496 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.695061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.695070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.695077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.695085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40504 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.695096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.695106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.695113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.695121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40512 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.695131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.695140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.695147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.695156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40520 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.695176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.695188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.801 [2024-05-15 08:36:47.695197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.801 [2024-05-15 08:36:47.695205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40528 len:8 PRP1 0x0 PRP2 0x0 00:25:11.801 [2024-05-15 08:36:47.695215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.801 [2024-05-15 08:36:47.695224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40536 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40544 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40552 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40560 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40568 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40576 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40584 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40592 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40600 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40608 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40616 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:11.802 [2024-05-15 08:36:47.695593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40624 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40632 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40640 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40648 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40656 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40664 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695799] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40672 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40680 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40688 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40696 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40704 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.695966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.695973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.695981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40712 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.695990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.696000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:11.802 [2024-05-15 08:36:47.696007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.802 [2024-05-15 08:36:47.696016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40720 len:8 PRP1 0x0 PRP2 0x0 00:25:11.802 [2024-05-15 08:36:47.696025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.802 [2024-05-15 08:36:47.696035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.803 [2024-05-15 08:36:47.696042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.803 [2024-05-15 08:36:47.696049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40728 len:8 PRP1 0x0 PRP2 0x0 00:25:11.803 [2024-05-15 08:36:47.696059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.803 [2024-05-15 08:36:47.696069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.803 [2024-05-15 08:36:47.696075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.803 [2024-05-15 08:36:47.696083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40736 len:8 PRP1 0x0 PRP2 0x0 00:25:11.803 [2024-05-15 08:36:47.696092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.803 [2024-05-15 08:36:47.696101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.803 [2024-05-15 08:36:47.696108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.803 [2024-05-15 08:36:47.696116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40744 len:8 PRP1 0x0 PRP2 0x0 00:25:11.803 [2024-05-15 08:36:47.696124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.803 [2024-05-15 08:36:47.696134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.803 [2024-05-15 08:36:47.696141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.803 [2024-05-15 08:36:47.696148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40752 len:8 PRP1 0x0 PRP2 0x0 00:25:11.803 [2024-05-15 08:36:47.696159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.803 [2024-05-15 08:36:47.696172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.803 [2024-05-15 08:36:47.696180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.803 [2024-05-15 08:36:47.696188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40760 len:8 PRP1 0x0 PRP2 0x0 00:25:11.803 [2024-05-15 08:36:47.696197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.803 [2024-05-15 08:36:47.696207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.803 [2024-05-15 
08:36:47.696214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.803 [2024-05-15 08:36:47.696221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40768 len:8 PRP1 0x0 PRP2 0x0 00:25:11.803 [2024-05-15 08:36:47.696231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.803 [2024-05-15 08:36:47.696240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.803 [2024-05-15 08:36:47.696248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.803 [2024-05-15 08:36:47.696256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40776 len:8 PRP1 0x0 PRP2 0x0 00:25:11.803 [2024-05-15 08:36:47.696265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.803 [2024-05-15 08:36:47.696275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.803 [2024-05-15 08:36:47.696282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.803 [2024-05-15 08:36:47.696290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40784 len:8 PRP1 0x0 PRP2 0x0 00:25:11.803 [2024-05-15 08:36:47.696300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.803 [2024-05-15 08:36:47.696309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.803 [2024-05-15 08:36:47.696317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.803 [2024-05-15 08:36:47.696325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40792 len:8 PRP1 0x0 PRP2 0x0 00:25:11.803 [2024-05-15 08:36:47.696335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.803 [2024-05-15 08:36:47.696344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.803 [2024-05-15 08:36:47.696352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.803 [2024-05-15 08:36:47.696360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40800 len:8 PRP1 0x0 PRP2 0x0 00:25:11.803 [2024-05-15 08:36:47.696370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.803 [2024-05-15 08:36:47.696416] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfe4e00 was disconnected and freed. reset controller. 00:25:11.803 [2024-05-15 08:36:47.696428] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:11.803 [2024-05-15 08:36:47.696438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
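The abort storm above is the expected qpair-teardown pattern: once bdev_nvme decides to fail over, every request still queued on the dying queue pair is completed manually with the generic status ABORTED - SQ DELETION (the "(00/08)" in each completion is SCT 00h / SC 08h), and the driver then retries the controller on the alternate trid. As a minimal sketch only, a dual-path attachment like the one failing over here (10.0.0.2:4421 to 10.0.0.2:4422, subsystem nqn.2016-06.io.spdk:cnode1) would typically be set up with two bdev_nvme_attach_controller RPCs; the bdev name Nvme0 and the exact flag spellings are assumptions based on stock SPDK scripts/rpc.py, not taken from this log:

# Sketch: attach the same NQN over two TCP paths so bdev_nvme can fail over
# between them ("-x failover" registers the second trid as an alternate path;
# flags assume a recent SPDK rpc.py -- verify against your SPDK version)
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

With that wiring, a listener going away on 4421 produces exactly this log shape: aborted queued I/O, a freed qpair, a failover notice, then a controller reset against 4422.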
00:25:11.803 [2024-05-15 08:36:47.696476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1b400 (9): Bad file descriptor
00:25:11.803 [2024-05-15 08:36:47.700459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:11.803 [2024-05-15 08:36:47.823926] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:11.803 [2024-05-15 08:36:52.054442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:11.803 [2024-05-15 08:36:52.054479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ABORTED - SQ DELETION (00/08) completions repeat for the remaining in-flight READ commands, sqid:1, lba:68256 through lba:68488 in steps of 8 (varying cid) ...]
[... then for the in-flight WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000), sqid:1, lba:68504 through lba:68880 in steps of 8 (varying cid) ...]
[... finally the queued WRITE commands, sqid:1 cid:0 (PRP1 0x0 PRP2 0x0), lba:68888 through lba:69264 in steps of 8, plus a queued READ at lba:68496, are aborted and completed manually with the same status ...]
00:25:11.808 [2024-05-15 08:36:52.067604] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe3eb00 was disconnected and freed. reset controller.
00:25:11.808 [2024-05-15 08:36:52.067617] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
[... four outstanding admin ASYNC EVENT REQUEST (0c) commands (qid:0, cid:0 through cid:3, cdw10:00000000 cdw11:00000000) each completed with ABORTED - SQ DELETION (00/08) ...]
00:25:11.808 [2024-05-15 08:36:52.067724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:11.808 [2024-05-15 08:36:52.067762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1b400 (9): Bad file descriptor
00:25:11.808 [2024-05-15 08:36:52.071684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:11.808 [2024-05-15 08:36:52.146837] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
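[Editor's note: each failover hop above ends in a "Resetting controller successful" message, and the harness counts three of them in the trace that follows. A minimal standalone sketch of that check, assuming the bdevperf output was captured to try.txt as the `cat .../try.txt` later in the trace suggests (filename and redirection are assumptions, not shown verbatim in the log):]

    # Count successful controller resets in the captured log;
    # the test expects exactly one per failover hop, three in total.
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful resets, saw $count" >&2
        exit 1
    fi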
00:25:11.808 
00:25:11.808 Latency(us) 
00:25:11.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:11.808 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:25:11.808 Verification LBA range: start 0x0 length 0x4000 
00:25:11.808 NVMe0n1 : 15.01 10946.35 42.76 782.12 0.00 10890.74 427.41 28835.84 
00:25:11.808 =================================================================================================================== 
00:25:11.808 Total : 10946.35 42.76 782.12 0.00 10890.74 427.41 28835.84 
00:25:11.808 Received shutdown signal, test time was about 15.000000 seconds 
00:25:11.808 
00:25:11.808 Latency(us) 
00:25:11.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:11.808 =================================================================================================================== 
00:25:11.808 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=392077 
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 392077 /var/tmp/bdevperf.sock 
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 392077 ']' 
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
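The 15-second run is judged right here: the nonzero Fail/s column (782.12) is the I/O aborted during the forced path switches, which the test tolerates, while the actual pass criterion is that exactly three "Resetting controller successful" events, one per failover, landed in the capture file. A sketch of the check the script performs before moving on:

    # sketch: failover.sh fails the test unless all three resets succeeded
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || exit 1

With that satisfied, a second bdevperf is launched with -z, so it idles on /var/tmp/bdevperf.sock until driven over RPC.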
00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:11.808 08:36:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:12.377 08:36:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:12.377 08:36:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:25:12.377 08:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:12.377 [2024-05-15 08:36:59.339180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:12.377 08:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:12.635 [2024-05-15 08:36:59.507658] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:12.635 08:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.893 NVMe0n1 00:25:12.893 08:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.151 00:25:13.151 08:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.408 00:25:13.408 08:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.408 08:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:13.666 08:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.925 08:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:17.210 08:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.210 08:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:17.210 08:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=393002 00:25:17.210 08:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:17.210 08:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 393002 00:25:18.147 0 00:25:18.147 08:37:05 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:18.147 [2024-05-15 08:36:58.377419] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:25:18.147 [2024-05-15 08:36:58.377467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392077 ] 00:25:18.147 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.147 [2024-05-15 08:36:58.431081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.147 [2024-05-15 08:36:58.499584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.147 [2024-05-15 08:37:00.745875] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:18.147 [2024-05-15 08:37:00.745922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.147 [2024-05-15 08:37:00.745932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.147 [2024-05-15 08:37:00.745941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.147 [2024-05-15 08:37:00.745948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.147 [2024-05-15 08:37:00.745955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.147 [2024-05-15 08:37:00.745962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.147 [2024-05-15 08:37:00.745969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.147 [2024-05-15 08:37:00.745975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.147 [2024-05-15 08:37:00.745982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.147 [2024-05-15 08:37:00.746004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.147 [2024-05-15 08:37:00.746018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139e400 (9): Bad file descriptor 00:25:18.147 [2024-05-15 08:37:00.879335] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:18.147 Running I/O for 1 seconds... 
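Because this bdevperf instance was started with -z, it stayed idle until the harness issued perform_tests over its UNIX socket (run_test_pid=393002 above); the attach on 10.0.0.2:4420, one more failover to 4421, and a successful reset all happen inside that driven run, and the 1-second verify summary follows. A minimal sketch of the drive sequence, with the long workspace paths shortened for readability:

    # sketch: start bdevperf idle (-z), then trigger the workload over its RPC socket
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests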
00:25:18.147 
00:25:18.147 Latency(us) 
00:25:18.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:18.147 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:25:18.147 Verification LBA range: start 0x0 length 0x4000 
00:25:18.147 NVMe0n1 : 1.01 10888.92 42.53 0.00 0.00 11698.66 2322.25 14930.81 
00:25:18.147 =================================================================================================================== 
00:25:18.147 Total : 10888.92 42.53 0.00 0.00 11698.66 2322.25 14930.81 
00:25:18.147 08:37:05 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:25:18.147 08:37:05 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 
00:25:18.406 08:37:05 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:25:18.406 08:37:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:25:18.406 08:37:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 
00:25:18.664 08:37:05 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:25:18.923 08:37:05 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 
00:25:22.210 08:37:08 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:25:22.210 08:37:08 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 
00:25:22.210 08:37:08 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 392077 
00:25:22.210 08:37:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 392077 ']' 
00:25:22.210 08:37:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 392077 
00:25:22.210 08:37:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 
00:25:22.210 08:37:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:25:22.210 08:37:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 392077 
00:25:22.211 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:25:22.211 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
00:25:22.211 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 392077' 00:25:22.211 killing process with pid 392077 
00:25:22.211 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 392077 
00:25:22.211 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 392077 
00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 
00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 
00:25:22.469 08:37:09
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:22.469 rmmod nvme_tcp 00:25:22.469 rmmod nvme_fabrics 00:25:22.469 rmmod nvme_keyring 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 389021 ']' 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 389021 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 389021 ']' 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 389021 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:22.469 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 389021 00:25:22.729 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:22.729 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:22.729 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 389021' 00:25:22.729 killing process with pid 389021 00:25:22.729 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 389021 00:25:22.729 [2024-05-15 08:37:09.530555] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:22.729 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 389021 00:25:22.988 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:22.988 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:22.988 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:22.988 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.988 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:22.988 08:37:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.988 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.988 08:37:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.901 08:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:24.901 00:25:24.901 real 0m37.646s 00:25:24.901 user 2m2.814s 
00:25:24.901 sys 0m7.007s 00:25:24.901 08:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:24.901 08:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:24.901 ************************************ 00:25:24.901 END TEST nvmf_failover 00:25:24.901 ************************************ 00:25:24.901 08:37:11 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:24.901 08:37:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:24.901 08:37:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:24.901 08:37:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.901 ************************************ 00:25:24.901 START TEST nvmf_host_discovery 00:25:24.901 ************************************ 00:25:24.901 08:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:25.160 * Looking for test storage... 00:25:25.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:25.160 08:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.160 08:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:25.160 08:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.160 08:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.160 08:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.160 08:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.160 08:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:25.160 08:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.429 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.429 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:30.429 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:30.429 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:30.430 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:30.430 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:30.430 Found net devices under 0000:86:00.0: cvl_0_0 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:30.430 Found net devices under 0000:86:00.1: cvl_0_1 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.430 08:37:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:30.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:25:30.430 00:25:30.430 --- 10.0.0.2 ping statistics --- 00:25:30.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.430 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:30.430 00:25:30.430 --- 10.0.0.1 ping statistics --- 00:25:30.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.430 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=397435 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 397435 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 397435 ']' 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:30.430 08:37:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.430 [2024-05-15 08:37:17.388144] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:25:30.431 [2024-05-15 08:37:17.388192] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.431 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.431 [2024-05-15 08:37:17.443722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.689 [2024-05-15 08:37:17.522607] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:30.689 [2024-05-15 08:37:17.522641] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.689 [2024-05-15 08:37:17.522648] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.689 [2024-05-15 08:37:17.522655] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.689 [2024-05-15 08:37:17.522660] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.689 [2024-05-15 08:37:17.522680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.258 [2024-05-15 08:37:18.233493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.258 [2024-05-15 08:37:18.245485] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:31.258 [2024-05-15 08:37:18.245649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.258 null0 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.258 null1 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=397617 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 397617 /tmp/host.sock 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 397617 ']' 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:31.258 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:31.258 08:37:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.518 [2024-05-15 08:37:18.317866] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:25:31.518 [2024-05-15 08:37:18.317906] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397617 ] 00:25:31.518 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.518 [2024-05-15 08:37:18.371837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.518 [2024-05-15 08:37:18.450819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.455 08:37:19 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.455 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.456 [2024-05-15 08:37:19.448818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:32.456 
08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.456 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:25:32.715 08:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:25:33.283 [2024-05-15 08:37:20.193708] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:33.283 [2024-05-15 08:37:20.193732] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:33.283 [2024-05-15 08:37:20.193745] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:33.542 [2024-05-15 08:37:20.321147] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:33.542 [2024-05-15 08:37:20.545493] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:33.542 [2024-05-15 08:37:20.545512] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:33.801 08:37:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:33.801 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:33.802 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.802 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.802 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:33.802 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:33.802 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.802 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:33.802 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:33.802 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:33.802 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.802 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.061 [2024-05-15 08:37:20.948913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.061 [2024-05-15 08:37:20.949030] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:34.061 [2024-05-15 08:37:20.949052] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:34.061 08:37:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.061 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.061 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:34.061 08:37:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.061 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.061 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:34.061 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:34.061 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:34.062 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.062 [2024-05-15 08:37:21.075420] 
bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:34.321 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:34.321 08:37:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:25:34.321 [2024-05-15 08:37:21.296541] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:34.321 [2024-05-15 08:37:21.296558] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:34.321 [2024-05-15 08:37:21.296563] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:35.258 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.259 [2024-05-15 08:37:22.209066] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:35.259 [2024-05-15 08:37:22.209088] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:35.259 [2024-05-15 08:37:22.217359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.259 [2024-05-15 08:37:22.217377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.259 [2024-05-15 08:37:22.217386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.259 [2024-05-15 08:37:22.217393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.259 [2024-05-15 08:37:22.217400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.259 [2024-05-15 08:37:22.217407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.259 [2024-05-15 08:37:22.217414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.259 [2024-05-15 08:37:22.217421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.259 [2024-05-15 08:37:22.217427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88130 is same with the state(5) to be set 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.259 [2024-05-15 08:37:22.227373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88130 (9): Bad file descriptor 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.259 [2024-05-15 08:37:22.237414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.259 [2024-05-15 08:37:22.237716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 [2024-05-15 08:37:22.237953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 [2024-05-15 08:37:22.237965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd88130 with addr=10.0.0.2, port=4420 00:25:35.259 [2024-05-15 08:37:22.237977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88130 is same with the state(5) to be set 00:25:35.259 [2024-05-15 08:37:22.237990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88130 (9): Bad file descriptor 00:25:35.259 [2024-05-15 08:37:22.238000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.259 [2024-05-15 08:37:22.238006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.259 [2024-05-15 08:37:22.238014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.259 [2024-05-15 08:37:22.238024] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
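The waitforcondition retry pattern that dominates this trace (the autotest_common.sh@910-916 markers above) can be reconstructed from the xtrace itself. A minimal sketch in bash, assuming the helper matches what the trace shows; the real body in autotest_common.sh may differ in detail:

    waitforcondition() {
        # Re-evaluate an arbitrary bash condition once per second, up to 10 tries.
        local cond=$1
        local max=10
        while (( max-- )); do
            # eval so that "$(get_subsystem_names)"-style conditions expand at check time
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # Usage, as in discovery.sh@105 above:
    # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'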
00:25:35.259 [2024-05-15 08:37:22.247469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.259 [2024-05-15 08:37:22.247746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 [2024-05-15 08:37:22.247928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 [2024-05-15 08:37:22.247940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd88130 with addr=10.0.0.2, port=4420 00:25:35.259 [2024-05-15 08:37:22.247947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88130 is same with the state(5) to be set 00:25:35.259 [2024-05-15 08:37:22.247959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88130 (9): Bad file descriptor 00:25:35.259 [2024-05-15 08:37:22.247970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.259 [2024-05-15 08:37:22.247978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.259 [2024-05-15 08:37:22.247985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.259 [2024-05-15 08:37:22.247995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.259 [2024-05-15 08:37:22.257522] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.259 [2024-05-15 08:37:22.257740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 [2024-05-15 08:37:22.257834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 [2024-05-15 08:37:22.257846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd88130 with addr=10.0.0.2, port=4420 00:25:35.259 [2024-05-15 08:37:22.257854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88130 is same with the state(5) to be set 00:25:35.259 [2024-05-15 08:37:22.257865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88130 (9): Bad file descriptor 00:25:35.259 [2024-05-15 08:37:22.257875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.259 [2024-05-15 08:37:22.257883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.259 [2024-05-15 08:37:22.257890] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.259 [2024-05-15 08:37:22.257899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
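The "connect() failed, errno = 111" lines in these retry blocks are ECONNREFUSED: the 4420 listener was just removed, so every reconnect attempt to that port is refused until the discovery poller drops the stale path. The test confirms convergence through get_subsystem_paths (discovery.sh@63), which the xtrace shows as roughly the following sketch (assuming rpc_cmd wraps scripts/rpc.py as elsewhere in the suite):

    get_subsystem_paths() {
        # Print the trsvcid of every connected path for controller $1,
        # numerically sorted and space-joined.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # After the 4420 listener is removed, this converges to "4421".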
00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.259 [2024-05-15 08:37:22.267578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.259 [2024-05-15 08:37:22.267838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.259 [2024-05-15 08:37:22.267950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 [2024-05-15 08:37:22.267965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd88130 with addr=10.0.0.2, port=4420 00:25:35.259 [2024-05-15 08:37:22.267973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88130 is same with the state(5) to be set 00:25:35.259 [2024-05-15 08:37:22.267984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88130 (9): Bad file descriptor 00:25:35.259 [2024-05-15 08:37:22.268000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.259 [2024-05-15 08:37:22.268007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.259 [2024-05-15 08:37:22.268014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.259 [2024-05-15 08:37:22.268023] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
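get_bdev_list (discovery.sh@55), polled next to confirm both namespaces survive the path flap, follows the same rpc_cmd-plus-jq shape; a sketch under the same assumptions:

    get_bdev_list() {
        # All bdev names on the host side, sorted and space-joined;
        # expected to read "nvme0n1 nvme0n2" at this point in the test.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs |
            jq -r '.[].name' | sort | xargs
    }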
00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.259 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.259 [2024-05-15 08:37:22.277642] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.259 [2024-05-15 08:37:22.277867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 [2024-05-15 08:37:22.278047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 [2024-05-15 08:37:22.278060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd88130 with addr=10.0.0.2, port=4420 00:25:35.259 [2024-05-15 08:37:22.278068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88130 is same with the state(5) to be set 00:25:35.259 [2024-05-15 08:37:22.278081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88130 (9): Bad file descriptor 00:25:35.259 [2024-05-15 08:37:22.278101] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.259 [2024-05-15 08:37:22.278109] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.259 [2024-05-15 08:37:22.278116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.259 [2024-05-15 08:37:22.278126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.518 [2024-05-15 08:37:22.287702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.518 [2024-05-15 08:37:22.288017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.518 [2024-05-15 08:37:22.288205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.518 [2024-05-15 08:37:22.288217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd88130 with addr=10.0.0.2, port=4420 00:25:35.518 [2024-05-15 08:37:22.288225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88130 is same with the state(5) to be set 00:25:35.518 [2024-05-15 08:37:22.288237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88130 (9): Bad file descriptor 00:25:35.518 [2024-05-15 08:37:22.288258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.518 [2024-05-15 08:37:22.288265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.518 [2024-05-15 08:37:22.288272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.518 [2024-05-15 08:37:22.288281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
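The notification bookkeeping checked a few steps below (discovery.sh@74-75) carries a running notify_id so that each check only counts events raised since the previous one; that is why notify_id climbs 0, 1, 2, then ends at 4 through this trace. A minimal sketch reconstructed from the xtrace, with the same caveat that the real script may differ in detail:

    # notify_id is a global, initialised to 0 earlier in the script.
    get_notification_count() {
        # Count only events newer than the last checked notification id,
        # then advance the running id by that amount.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" |
            jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }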
00:25:35.518 [2024-05-15 08:37:22.296284] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:35.518 [2024-05-15 08:37:22.296303] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:35.518 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.518 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:35.518 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:35.518 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:35.518 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:35.518 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:35.518 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:35.519 
08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.519 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.777 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.777 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:35.777 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:35.777 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:35.777 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:35.777 08:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.777 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.777 08:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.709 [2024-05-15 08:37:23.632648] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:36.709 [2024-05-15 08:37:23.632664] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:36.710 [2024-05-15 08:37:23.632674] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:36.968 [2024-05-15 08:37:23.759074] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:36.968 [2024-05-15 08:37:23.858492] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:36.968 [2024-05-15 08:37:23.858520] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.968 request: 00:25:36.968 { 00:25:36.968 "name": "nvme", 00:25:36.968 "trtype": "tcp", 00:25:36.968 "traddr": "10.0.0.2", 00:25:36.968 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:36.968 "adrfam": "ipv4", 00:25:36.968 "trsvcid": "8009", 00:25:36.968 "wait_for_attach": true, 00:25:36.968 "method": "bdev_nvme_start_discovery", 00:25:36.968 "req_id": 1 00:25:36.968 } 00:25:36.968 Got JSON-RPC error response 00:25:36.968 response: 00:25:36.968 { 00:25:36.968 "code": -17, 00:25:36.968 "message": "File exists" 00:25:36.968 } 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.968 request: 00:25:36.968 { 00:25:36.968 "name": "nvme_second", 00:25:36.968 "trtype": "tcp", 00:25:36.968 "traddr": "10.0.0.2", 00:25:36.968 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:36.968 "adrfam": "ipv4", 00:25:36.968 "trsvcid": "8009", 00:25:36.968 "wait_for_attach": true, 00:25:36.968 "method": "bdev_nvme_start_discovery", 00:25:36.968 "req_id": 1 00:25:36.968 } 00:25:36.968 Got JSON-RPC error response 00:25:36.968 response: 00:25:36.968 { 00:25:36.968 "code": -17, 00:25:36.968 "message": "File exists" 00:25:36.968 } 00:25:36.968 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.227 08:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.227 08:37:24 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:37.227 08:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:38.161 [2024-05-15 08:37:25.103603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:38.161 [2024-05-15 08:37:25.103728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:38.161 [2024-05-15 08:37:25.103740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb85a0 with addr=10.0.0.2, port=8010
00:25:38.161 [2024-05-15 08:37:25.103755] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:38.161 [2024-05-15 08:37:25.103762] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:38.161 [2024-05-15 08:37:25.103768] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:39.095 [2024-05-15 08:37:26.106130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.095 [2024-05-15 08:37:26.106439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.095 [2024-05-15 08:37:26.106459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb85a0 with addr=10.0.0.2, port=8010
00:25:39.095 [2024-05-15 08:37:26.106471] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:39.095 [2024-05-15 08:37:26.106478] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:39.095 [2024-05-15 08:37:26.106484] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:40.468 [2024-05-15 08:37:27.108299] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:25:40.468 request:
00:25:40.468 {
00:25:40.468 "name": "nvme_second",
00:25:40.468 "trtype": "tcp",
00:25:40.468 "traddr": "10.0.0.2",
00:25:40.468 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:40.469 "adrfam": "ipv4",
00:25:40.469 "trsvcid": "8010",
00:25:40.469 "attach_timeout_ms": 3000,
00:25:40.469 "method": "bdev_nvme_start_discovery",
00:25:40.469 "req_id": 1
00:25:40.469 }
00:25:40.469 Got JSON-RPC error response
00:25:40.469 response:
00:25:40.469 {
00:25:40.469 "code": -110,
00:25:40.469 "message": "Connection timed out"
00:25:40.469 }
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 397617
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:40.469 rmmod nvme_tcp
00:25:40.469 rmmod nvme_fabrics
00:25:40.469 rmmod nvme_keyring
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 397435 ']'
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 397435
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 397435 ']'
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 397435
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 397435
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 397435'
00:25:40.469 killing process with pid 397435
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 397435
00:25:40.469 [2024-05-15 08:37:27.280664] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:25:40.469 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 397435
00:25:40.728 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:40.728 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:40.728 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:40.728 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:40.728 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:40.728 08:37:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:40.728 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:40.728 08:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:42.629 08:37:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:42.629
00:25:42.629 real 0m17.655s
00:25:42.629 user 0m22.302s
00:25:42.629 sys 0m5.304s
00:25:42.629 08:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable
00:25:42.629 08:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.629 ************************************
00:25:42.629 END TEST nvmf_host_discovery
00:25:42.629 ************************************
00:25:42.629 08:37:29 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:42.629 08:37:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:25:42.629 08:37:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:25:42.629 08:37:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:42.629 ************************************
00:25:42.629 START TEST nvmf_host_multipath_status
00:25:42.629 ************************************
00:25:42.629 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:42.888 * Looking for test storage...
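Condensed out of the xtrace above, the nvmf_host_discovery negative check that just ended amounts to the following standalone sketch. The socket path, addresses and NQN are the ones in the trace; the rpc.py invocation is folded into a variable purely for readability, and the script assumes the SPDK host application is already listening on /tmp/host.sock.

# Nothing listens on port 8010, so the attach must give up after the
# 3000 ms cap set by -T and return JSON-RPC error -110 (Connection timed out).
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"
if $rpc bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
    echo "expected bdev_nvme_start_discovery to time out" >&2
    exit 1
fi
# The pre-existing discovery service must be untouched afterwards; the same
# pipeline the test uses should still print just "nvme":
$rpc bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs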
00:25:42.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:42.888 08:37:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.888 08:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.150 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:48.151 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:48.151 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
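The per-device loop that runs next resolves each matched PCI function to its kernel net interface purely through sysfs. As a minimal standalone sketch of the same lookup (the two PCI addresses are the e810 ports this run detects):

# Print the netdev(s) bound to each e810 port: the same sysfs walk that
# nvmf/common.sh performs in the entries below.
for pci in 0000:86:00.0 0000:86:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue   # no netdev entry if the ice driver is not bound
        echo "Found net devices under $pci: ${dev##*/}"
    done
done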
00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:48.151 Found net devices under 0000:86:00.0: cvl_0_0 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:48.151 Found net devices under 0000:86:00.1: cvl_0_1 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:48.151 08:37:34 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:48.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:25:48.151 00:25:48.151 --- 10.0.0.2 ping statistics --- 00:25:48.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.151 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:25:48.151 00:25:48.151 --- 10.0.0.1 ping statistics --- 00:25:48.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.151 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:48.151 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.152 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:48.152 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=402544 00:25:48.152 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 402544 00:25:48.152 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 402544 ']' 00:25:48.152 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.152 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:48.152 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.152 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:48.152 08:37:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.152 [2024-05-15 08:37:34.974331] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
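The nvmf_tgt instance now starting up runs entirely inside the network namespace stitched together above. Condensed, that rig is the following sketch, with interface names and addresses exactly as detected in this run:

# Target port in a private namespace, initiator port in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
# Every target-side command is then prefixed with "ip netns exec
# cvl_0_0_ns_spdk", as the nvmf_tgt launch above shows.

Keeping the target in its own namespace lets one machine drive a real TCP path between two independent network stacks over the physical e810 ports.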
00:25:48.152 [2024-05-15 08:37:34.974378] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:48.152 EAL: No free 2048 kB hugepages reported on node 1
00:25:48.152 [2024-05-15 08:37:35.030882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:25:48.152 [2024-05-15 08:37:35.110529] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:48.152 [2024-05-15 08:37:35.110565] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:48.152 [2024-05-15 08:37:35.110572] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:48.152 [2024-05-15 08:37:35.110578] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:48.152 [2024-05-15 08:37:35.110584] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:48.152 [2024-05-15 08:37:35.110630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:48.152 [2024-05-15 08:37:35.110634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:49.086 08:37:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:25:49.086 08:37:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0
00:25:49.086 08:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:49.086 08:37:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:49.086 08:37:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:49.086 08:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:49.086 08:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=402544
00:25:49.086 08:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:49.086 [2024-05-15 08:37:35.976680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:49.086 08:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:49.344 Malloc0
00:25:49.344 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:25:49.344 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:49.602 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:49.859 [2024-05-15 08:37:36.684463] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:25:49.859 [2024-05-15 08:37:36.684675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:49.859 [2024-05-15 08:37:36.857094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=403009
00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 403009 /var/tmp/bdevperf.sock
00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 403009 ']'
00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100
00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
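Stripped of the xtrace noise, the target bring-up just logged plus the two-path attach that the status checks below rely on reduce to this sketch. NQNs, addresses, ports and sockets are the ones in this log; $rpc stands for the full scripts/rpc.py path.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target side (default RPC socket): transport, malloc-backed subsystem, and
# two listeners whose ANA states the checks below keep flipping.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# Host side (bdevperf's RPC socket): attach the same controller through both
# listeners; -x multipath on the second attach merges them under Nvme0n1.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# Every port_status probe below is then a jq filter over the path list,
# e.g. "is the 4420 path current?":
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'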
00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:49.859 08:37:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:50.793 08:37:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:50.793 08:37:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:50.793 08:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:51.050 08:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:51.307 Nvme0n1 00:25:51.307 08:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:51.873 Nvme0n1 00:25:51.873 08:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:51.873 08:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:53.771 08:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:53.772 08:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:54.029 08:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:54.029 08:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.404 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:55.662 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.662 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:55.662 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.662 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:55.920 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.920 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:55.920 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.920 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:56.178 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.178 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:56.178 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.178 08:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.178 08:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.178 08:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:56.178 08:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:56.436 08:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:56.694 08:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:57.629 08:37:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:57.629 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:57.629 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.629 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:57.888 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.888 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:57.888 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.888 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:57.888 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.888 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:57.888 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.888 08:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.147 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.147 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.147 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.147 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.405 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.405 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:58.405 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.405 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:58.664 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.664 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:58.664 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.664 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.664 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.664 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:58.664 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:58.922 08:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:59.180 08:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:00.114 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:00.114 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:00.114 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.114 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.372 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.372 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:00.372 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.372 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.630 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.630 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.630 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.630 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:00.630 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.630 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:00.630 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.630 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:00.888 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.888 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:00.888 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.888 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.146 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.146 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:01.146 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.146 08:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.146 08:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.146 08:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:01.146 08:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:01.404 08:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:01.662 08:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:02.597 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:02.597 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:02.597 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.597 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.855 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.855 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:02.855 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.855 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.114 08:37:49 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.114 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.114 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.114 08:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.114 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.114 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.114 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.114 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.372 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.372 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.372 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.372 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:03.631 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.631 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:03.631 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.631 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.631 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.631 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:03.631 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:03.888 08:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:04.146 08:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:05.079 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:05.079 08:37:52 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:05.079 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.079 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.337 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.337 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.337 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.337 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.594 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.595 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.595 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.595 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.595 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.595 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.595 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.595 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.852 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.852 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:05.852 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.852 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.109 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.109 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:06.109 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.109 08:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.109 08:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.109 08:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:06.109 08:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:06.365 08:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:06.623 08:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:07.558 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:07.558 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:07.558 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.558 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.816 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.817 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:07.817 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.817 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.817 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.817 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.817 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.817 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.074 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.074 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.075 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.075 08:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.332 08:37:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.332 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:08.332 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.332 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.591 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.591 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.591 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.591 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.591 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.591 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:08.849 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:08.849 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:09.107 08:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.366 08:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:10.301 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:10.301 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.301 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.301 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.559 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.559 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:10.559 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.559 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.559 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.559 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.559 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.559 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.817 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.817 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.817 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.817 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.075 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.075 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.075 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.075 08:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.075 08:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.075 08:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.075 08:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.075 08:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.333 08:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.333 08:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:11.333 08:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.591 08:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:11.848 08:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:12.781 08:37:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:12.781 08:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:12.781 08:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.781 08:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.039 08:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.039 08:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:13.039 08:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.039 08:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.039 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.039 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.039 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.039 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.297 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.297 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.297 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.297 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.555 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.555 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.555 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.555 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.813 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.813 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:13.813 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
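
The repeated port_status and set_ANA_state expansions in the xtrace above reduce to two small helpers. The sketch below is reconstructed from the trace alone (the multipath_status.sh@59-64 markers), not copied from the test script, so the real function bodies may differ; the rpc.py path is shortened here for readability.

# Minimal sketch inferred from the xtrace; socket path, NQN, address and
# ports are the ones the trace itself shows.
port_status() {
	local port=$1 attr=$2 expected=$3 # attr is current|connected|accessible
	local actual
	# Ask bdevperf, via its RPC socket, for its view of the I/O paths and
	# pull out one attribute of the path behind the given listener port.
	actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
		| jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
	[[ "$actual" == "$expected" ]]
}

set_ANA_state() {
	# $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
	scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t tcp -a 10.0.0.2 -s 4420 -n "$1"
	scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

Each check_status A B C D E F run in the trace is then just six port_status calls (current, connected, accessible for port 4420, then the same three for 4421), always preceded by a sleep 1 so the ANA change has time to reach the host.
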
00:26:13.813 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.813 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.813 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:13.813 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:14.070 08:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:14.328 08:38:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:15.261 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:15.261 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:15.261 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.261 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.518 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.518 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:15.518 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.518 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.518 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.518 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.776 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.776 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.776 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.776 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.776 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.776 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.034 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.034 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.034 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.034 08:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.291 08:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.291 08:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:16.291 08:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.291 08:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.291 08:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.291 08:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:16.291 08:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:16.549 08:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:16.807 08:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:17.741 08:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:17.741 08:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:17.741 08:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.741 08:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.999 08:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.999 08:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:17.999 08:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.999 08:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.256 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ false == \f\a\l\s\e ]] 00:26:18.256 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.256 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.256 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.256 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.256 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.256 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.256 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.514 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.514 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:18.514 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.514 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.771 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.771 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:18.771 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.771 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.771 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.771 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 403009 00:26:18.772 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 403009 ']' 00:26:18.772 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 403009 00:26:18.772 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:26:18.772 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:18.772 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 403009 00:26:19.032 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:19.032 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:19.033 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 403009' 00:26:19.033 killing process with pid 403009 00:26:19.033 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 403009 00:26:19.033 08:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 403009 00:26:19.033 Connection closed with partial response: 00:26:19.033 00:26:19.033 00:26:19.033 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 403009 00:26:19.033 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:19.033 [2024-05-15 08:37:36.916565] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:26:19.033 [2024-05-15 08:37:36.916612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid403009 ] 00:26:19.033 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.033 [2024-05-15 08:37:36.966038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.033 [2024-05-15 08:37:37.039363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.033 Running I/O for 90 seconds... 00:26:19.033 [2024-05-15 08:37:50.813995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.814035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.814072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.814082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.814095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.814102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.814115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.814122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.814134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.814141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.814153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.814159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 
m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.814176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.814183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.814212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.814219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.814231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.814239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:19.033 [2024-05-15 08:37:50.815841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.033 [2024-05-15 08:37:50.815848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.815862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.815869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.815885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
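
The (03/02) pair printed with every completion above is the NVMe status code type and status code: SCT 0x3 is Path Related Status, and within that type SC 0x02 is Asymmetric Access Inaccessible, i.e. precisely the ANA state the test set on the listener. A quick way to tally these statuses from the dumped log, assuming the try.txt path from the cat command above is still in place:

grep -o 'ASYMMETRIC ACCESS [A-Z ]*(0[0-9]/0[0-9])' \
	/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt \
	| sort | uniq -c
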
00:26:19.034 [2024-05-15 08:37:50.815892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.815906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.815913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.815927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.815934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.815948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.815956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.815970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.815976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.815991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.815998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.816911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.816921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.816939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.816946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.816962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.816969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.816985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.816993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
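
Every print_command line in this dump is eventually answered by a print_completion line carrying the same cid on the same queue pair. Below is a rough gawk sketch for auditing such a dump; rough because cids are reused once a command completes, so it can only flag commands still outstanding at the end of the log:

gawk '
	# submission: remember the LBA of each queued command by its cid
	match($0, /print_command:.*cid:([0-9]+).*lba:([0-9]+)/, m) { lba[m[1]] = m[2] }
	# completion: the same cid coming back retires the command
	match($0, /print_completion:.*cid:([0-9]+)/, m) { delete lba[m[1]] }
	END { for (cid in lba) print "cid " cid " (lba " lba[cid] ") never completed" }
' try.txt
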
00:26:19.034 [2024-05-15 08:37:50.817523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:19.034 [2024-05-15 08:37:50.817714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.034 [2024-05-15 08:37:50.817721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.817983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.817999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.818006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.818029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.818052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.818076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.818099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.818123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.818147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.818173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:37:50.818196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
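
For reference, the jq filters that recur throughout the trace assume that bdev_nvme_get_io_paths returns JSON shaped roughly like the skeleton below. The values here are invented for illustration and real output carries more fields, but the fields the filters actually touch (poll_groups[].io_paths[], the current/connected/accessible flags, and transport.trsvcid) are exactly the ones visible in the trace:

jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").accessible' <<'EOF'
{
  "poll_groups": [
    {
      "io_paths": [
        {
          "bdev_name": "Nvme0n1",
          "current": true, "connected": true, "accessible": true,
          "transport": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420" }
        },
        {
          "bdev_name": "Nvme0n1",
          "current": false, "connected": true, "accessible": false,
          "transport": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4421" }
        }
      ]
    }
  ]
}
EOF

This prints false, mirroring the final port_status 4421 accessible false check the test makes before killing bdevperf.
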
00:26:19.035 [2024-05-15 08:37:50.818220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:37:50.818245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:37:50.818271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:37:50.818295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:37:50.818319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:37:50.818343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:37:50.818360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:37:50.818367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:38:03.638344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:38:03.638386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:38:03.638406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:38:03.638425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.035 [2024-05-15 08:38:03.638444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:38:03.638463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:38:03.638482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:38:03.638506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:38:03.638525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:38:03.638544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:38:03.638563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-05-15 08:38:03.638581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:19.035 [2024-05-15 08:38:03.638593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 
dnr:0 00:26:19.036 [2024-05-15 08:38:03.638933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.036 [2024-05-15 08:38:03.638978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.638990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.638997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.036 [2024-05-15 08:38:03.639149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.639837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.036 [2024-05-15 08:38:03.639844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:19.036 [2024-05-15 08:38:03.640148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.036 [2024-05-15 08:38:03.640158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:19.036 Received shutdown signal, test time was about 27.008306 seconds 00:26:19.036 00:26:19.036 Latency(us) 00:26:19.037 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:26:19.037 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:19.037 Verification LBA range: start 0x0 length 0x4000 00:26:19.037 Nvme0n1 : 27.01 10228.19 39.95 0.00 0.00 12492.94 134.46 3019898.88 00:26:19.037 =================================================================================================================== 00:26:19.037 Total : 10228.19 39.95 0.00 0.00 12492.94 134.46 3019898.88 00:26:19.037 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.295 rmmod nvme_tcp 00:26:19.295 rmmod nvme_fabrics 00:26:19.295 rmmod nvme_keyring 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 402544 ']' 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 402544 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 402544 ']' 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 402544 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 402544 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 402544' 00:26:19.295 killing process with pid 402544 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 402544 00:26:19.295 [2024-05-15 08:38:06.312048] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:26:19.295 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 402544 00:26:19.553 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:19.553 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:19.553 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:19.553 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.553 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.553 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.553 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.553 08:38:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.086 08:38:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:22.086 00:26:22.086 real 0m38.970s 00:26:22.086 user 1m46.243s 00:26:22.086 sys 0m10.036s 00:26:22.086 08:38:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:22.086 08:38:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.086 ************************************ 00:26:22.086 END TEST nvmf_host_multipath_status 00:26:22.086 ************************************ 00:26:22.086 08:38:08 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:22.086 08:38:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:22.086 08:38:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:22.086 08:38:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:22.086 ************************************ 00:26:22.086 START TEST nvmf_discovery_remove_ifc 00:26:22.086 ************************************ 00:26:22.086 08:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:22.086 * Looking for test storage... 
00:26:22.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:22.086 [log condensed] discovery_remove_ifc.sh sources test/nvmf/common.sh, which sets the usual test defaults: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562, NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562, NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn. paths/export.sh then re-exports PATH with the go/golangci/protoc toolchain prefixes (four near-identical multi-kilobyte PATH dumps elided), and build_nvmf_app_args appends '-i $NVMF_APP_SHM_ID -e 0xFFFF' to NVMF_APP. The script itself sets discovery_port=8009, discovery_nqn=nqn.2014-08.org.nvmexpress.discovery, nqn=nqn.2016-06.io.spdk:cnode, host_nqn=nqn.2021-12.io.spdk:test and host_sock=/tmp/host.sock.
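For orientation, these NQNs and the discovery port are the same identifiers a host would hand to nvme-cli. A hypothetical stand-alone counterpart of the discovery this test later drives in-process through SPDK's bdev_nvme (this exact command is not part of the log; 10.0.0.2 is the target address nvmftestinit picks below):

# Hypothetical nvme-cli equivalent of the in-process discovery (not run by this test):
nvme discover -t tcp -a 10.0.0.2 -s 8009 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562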
00:26:22.087 [log condensed] nvmftestinit traps nvmftestfini, runs prepare_net_devs, and gather_supported_nvmf_pci_devs probes the supported Intel (e810, x722) and Mellanox device IDs; the xtrace of the array bookkeeping is elided. The probe reports:
Found 0000:86:00.0 (0x8086 - 0x159b)
Found 0000:86:00.1 (0x8086 - 0x159b)
Found net devices under 0000:86:00.0: cvl_0_0
Found net devices under 0000:86:00.1: cvl_0_1
00:26:27.376 [log condensed] Both ports are bound to the ice driver, so is_hw=yes; for the tcp transport nvmf_tcp_init selects NVMF_TARGET_INTERFACE=cvl_0_0 (to be moved into NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk as NVMF_FIRST_TARGET_IP=10.0.0.2) and NVMF_INITIATOR_INTERFACE=cvl_0_1 (NVMF_INITIATOR_IP=10.0.0.1). The raw trace of that wiring continues after the sketch below.
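Stripped of the per-line xtrace prefixes, the interface plumbing performed by the trace below boils down to roughly the following (a minimal sketch using this run's names; the canonical implementation is nvmf_tcp_init in test/nvmf/common.sh):

# Sketch: split the two NIC ports into a target namespace and a root-namespace initiator.
TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2; INITIATOR_IP=10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"            # target port leaves the root namespace
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TARGET_IP"                                 # initiator -> target reachability check
ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"   # target -> initiator reachability check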
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.376 08:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:27.376 08:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:27.376 08:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.376 08:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:27.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:26:27.376 00:26:27.376 --- 10.0.0.2 ping statistics --- 00:26:27.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.376 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:26:27.376 00:26:27.376 --- 10.0.0.1 ping statistics --- 00:26:27.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.376 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=411314 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 411314 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 411314 ']' 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:27.376 08:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.376 [2024-05-15 08:38:14.298482] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
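nvmfappstart backgrounds nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. Schematically it is equivalent to something like this (a sketch only; the paths and arguments match this run, but the polling details of the real waitforlisten helper in test/common/autotest_common.sh may differ):

# Sketch: launch the target in its namespace, then poll the default RPC socket.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" || exit 1                             # target died during startup
    "$spdk/scripts/rpc.py" rpc_get_methods &> /dev/null && break   # /var/tmp/spdk.sock is up
    sleep 0.1
done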
00:26:27.376 [2024-05-15 08:38:14.298525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.376 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.376 [2024-05-15 08:38:14.353607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.635 [2024-05-15 08:38:14.430561] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.635 [2024-05-15 08:38:14.430593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.635 [2024-05-15 08:38:14.430600] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.635 [2024-05-15 08:38:14.430606] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.635 [2024-05-15 08:38:14.430612] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.635 [2024-05-15 08:38:14.430643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.201 [2024-05-15 08:38:15.137979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.201 [2024-05-15 08:38:15.145957] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:28.201 [2024-05-15 08:38:15.146122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:28.201 null0 00:26:28.201 [2024-05-15 08:38:15.178110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=411556 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 411556 /tmp/host.sock 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 411556 ']' 00:26:28.201 08:38:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:28.201 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:28.201 08:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.459 [2024-05-15 08:38:15.239147] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:26:28.459 [2024-05-15 08:38:15.239191] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411556 ] 00:26:28.459 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.459 [2024-05-15 08:38:15.292052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.459 [2024-05-15 08:38:15.364406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.024 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:29.024 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:26:29.024 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:29.024 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:29.024 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.024 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.024 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.024 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:29.024 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.024 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.282 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.282 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:29.282 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.282 08:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.215 [2024-05-15 08:38:17.176687] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:30.215 [2024-05-15 08:38:17.176709] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:30.215 [2024-05-15 
08:38:17.176721] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:30.473 [2024-05-15 08:38:17.305119] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:30.473 [2024-05-15 08:38:17.407170] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:30.473 [2024-05-15 08:38:17.407215] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:30.473 [2024-05-15 08:38:17.407236] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:30.473 [2024-05-15 08:38:17.407248] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:30.473 [2024-05-15 08:38:17.407266] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.473 [2024-05-15 08:38:17.414913] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16127f0 was disconnected and freed. delete nvme_qpair. 
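The get_bdev_list/wait_for_bdev helpers traced above poll the host app's private RPC socket until the bdev list matches an expected value. A minimal bash sketch of that pattern, assuming SPDK's scripts/rpc.py and jq are available (rpc_cmd in the trace is the test suite's wrapper around rpc.py; the helper bodies here are a simplification, not the verbatim script source):

    get_bdev_list() {
        # List bdev names from the host socket, sorted and joined on one line.
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected string
        # (e.g. "nvme0n1" after attach, "" after the interface is removed).
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }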
00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:30.473 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:30.731 08:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.665 08:38:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.665 08:38:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.665 08:38:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.665 08:38:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.665 08:38:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.665 08:38:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.665 08:38:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.665 08:38:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.665 08:38:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:31.665 08:38:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.038 08:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.038 08:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.038 08:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.038 08:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.038 08:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.038 08:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:26:33.038 08:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.038 08:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.038 08:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.038 08:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.971 08:38:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.971 08:38:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.971 08:38:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.971 08:38:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.971 08:38:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.971 08:38:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.971 08:38:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.971 08:38:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.971 08:38:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.971 08:38:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.904 08:38:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.904 08:38:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.904 08:38:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.904 08:38:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.904 08:38:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.904 08:38:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.904 08:38:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.904 08:38:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.904 08:38:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.904 08:38:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.837 08:38:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.837 08:38:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.837 08:38:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.837 08:38:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.837 08:38:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.837 08:38:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.837 08:38:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.837 08:38:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
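The polling above repeats because the discovery-attached bdev only disappears once the host's controller-loss timeout expires. The interface manipulation driving it is traced nearby; a condensed sketch of the down/up cycle this test performs (namespace and interface names are the fixtures from this run, commands copied from the trace):

    # Take the target-side interface away from under the connected host.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... wait_for_bdev '' -- the attached bdev must vanish ...
    # Restore the interface so discovery can re-attach a fresh controller.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # ... wait_for_bdev nvme1n1 ...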
00:26:35.837 [2024-05-15 08:38:22.848537] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:35.837 [2024-05-15 08:38:22.848574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.837 [2024-05-15 08:38:22.848584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.837 [2024-05-15 08:38:22.848609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.837 [2024-05-15 08:38:22.848616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.837 [2024-05-15 08:38:22.848624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.837 [2024-05-15 08:38:22.848634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.837 [2024-05-15 08:38:22.848641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.837 [2024-05-15 08:38:22.848648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.837 [2024-05-15 08:38:22.848655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.837 [2024-05-15 08:38:22.848662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.837 [2024-05-15 08:38:22.848668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d9920 is same with the state(5) to be set 00:26:35.837 [2024-05-15 08:38:22.858560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d9920 (9): Bad file descriptor 00:26:36.095 08:38:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.095 08:38:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.095 [2024-05-15 08:38:22.868599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:37.026 08:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.026 08:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.026 08:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.026 08:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.026 08:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.026 08:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.026 08:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.026 [2024-05-15 08:38:23.900184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:37.962 [2024-05-15 
08:38:24.924181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:37.962 [2024-05-15 08:38:24.924221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d9920 with addr=10.0.0.2, port=4420 00:26:37.962 [2024-05-15 08:38:24.924235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d9920 is same with the state(5) to be set 00:26:37.962 [2024-05-15 08:38:24.924623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d9920 (9): Bad file descriptor 00:26:37.962 [2024-05-15 08:38:24.924648] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:37.962 [2024-05-15 08:38:24.924672] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:37.962 [2024-05-15 08:38:24.924694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.962 [2024-05-15 08:38:24.924706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.962 [2024-05-15 08:38:24.924717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.962 [2024-05-15 08:38:24.924727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.962 [2024-05-15 08:38:24.924736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.962 [2024-05-15 08:38:24.924746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.962 [2024-05-15 08:38:24.924760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.962 [2024-05-15 08:38:24.924769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.962 [2024-05-15 08:38:24.924779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.962 [2024-05-15 08:38:24.924788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.962 [2024-05-15 08:38:24.924797] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
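The connect() failures and "Resetting controller failed" messages above are the intended outcome, governed by the timeout flags passed to bdev_nvme_start_discovery when the host was set up. A sketch of that invocation against the host socket, with flags copied from the trace and their semantics as documented for SPDK's bdev_nvme options (rpc.py path assumed):

    # --reconnect-delay-sec 1:      retry the TCP connection once per second
    # --fast-io-fail-timeout-sec 1: fail queued I/O after ~1s of disconnection
    # --ctrlr-loss-timeout-sec 2:   give up and delete the controller after ~2s
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach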
00:26:37.962 [2024-05-15 08:38:24.925259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d8d50 (9): Bad file descriptor 00:26:37.962 [2024-05-15 08:38:24.926270] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:37.962 [2024-05-15 08:38:24.926283] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:37.962 08:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.962 08:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.962 08:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.337 08:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.337 08:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.337 08:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.337 08:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.337 08:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.337 08:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.337 08:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.337 08:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.337 08:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:39.337 08:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:39.337 08:38:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.272 [2024-05-15 08:38:26.982742] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:40.272 [2024-05-15 08:38:26.982763] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:40.272 [2024-05-15 08:38:26.982778] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:40.272 [2024-05-15 08:38:27.113183] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:40.272 08:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.272 08:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.272 08:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.272 08:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.272 08:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.272 08:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.272 08:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.272 08:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.272 [2024-05-15 08:38:27.171283] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:40.272 [2024-05-15 08:38:27.171319] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:40.272 [2024-05-15 08:38:27.171336] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:40.272 [2024-05-15 08:38:27.171350] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:40.272 [2024-05-15 08:38:27.171357] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:40.272 08:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:40.272 08:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.272 [2024-05-15 08:38:27.180043] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15c72d0 was disconnected and freed. delete nvme_qpair. 
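On re-attach the subsystem comes back as nvme1/nvme1n1 rather than nvme0n1 because, after the old controller is deleted, the discovery service allocates the next free instance under the -b nvme base name instead of reusing the old one (that naming behavior is an inference from this trace, not a documented guarantee). A quick verification against the host socket, as a sketch:

    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'
    # expected once re-attach completes:
    # nvme1n1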
00:26:41.205 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.205 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.205 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.205 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.205 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.205 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.205 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.205 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.205 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 411556 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 411556 ']' 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 411556 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 411556 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 411556' 00:26:41.464 killing process with pid 411556 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 411556 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 411556 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:41.464 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:41.464 rmmod nvme_tcp 00:26:41.723 rmmod nvme_fabrics 00:26:41.723 rmmod nvme_keyring 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:41.723 
08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 411314 ']' 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 411314 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 411314 ']' 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 411314 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 411314 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 411314' 00:26:41.723 killing process with pid 411314 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 411314 00:26:41.723 [2024-05-15 08:38:28.576748] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:41.723 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 411314 00:26:41.982 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:41.982 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:41.982 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:41.982 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.982 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:41.982 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.982 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.982 08:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.885 08:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:43.885 00:26:43.885 real 0m22.165s 00:26:43.885 user 0m27.812s 00:26:43.885 sys 0m5.269s 00:26:43.885 08:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:43.885 08:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.885 ************************************ 00:26:43.885 END TEST nvmf_discovery_remove_ifc 00:26:43.885 ************************************ 00:26:43.885 08:38:30 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:43.885 08:38:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:43.885 08:38:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:43.885 08:38:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:44.144 
************************************ 00:26:44.144 START TEST nvmf_identify_kernel_target 00:26:44.144 ************************************ 00:26:44.144 08:38:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:44.144 * Looking for test storage... 00:26:44.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:44.144 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:44.145 08:38:31 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:44.145 08:38:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:49.405 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:49.406 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:49.406 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:49.406 Found net devices under 0000:86:00.0: cvl_0_0 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:49.406 Found net devices under 0000:86:00.1: cvl_0_1 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:26:49.406 00:26:49.406 --- 10.0.0.2 ping statistics --- 00:26:49.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.406 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:49.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:26:49.406 00:26:49.406 --- 10.0.0.1 ping statistics --- 00:26:49.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.406 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:49.406 08:38:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:49.407 08:38:36 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:49.407 08:38:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:51.306 Waiting for block devices as requested 00:26:51.306 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:51.306 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:51.565 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:51.565 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:51.565 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:51.565 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:51.823 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:51.823 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:51.823 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:51.823 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:52.082 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:52.082 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:52.082 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:52.341 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:52.341 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:52.341 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:52.599 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:52.599 No valid GPT data, bailing 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:52.599 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:52.600 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:52.600 00:26:52.600 Discovery Log Number of Records 2, Generation counter 2 00:26:52.600 =====Discovery Log Entry 0====== 00:26:52.600 trtype: tcp 00:26:52.600 adrfam: ipv4 00:26:52.600 subtype: current discovery subsystem 00:26:52.600 treq: not specified, sq flow control disable supported 00:26:52.600 portid: 1 00:26:52.600 trsvcid: 4420 00:26:52.600 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:52.600 traddr: 10.0.0.1 00:26:52.600 eflags: none 00:26:52.600 sectype: none 00:26:52.600 =====Discovery Log Entry 1====== 00:26:52.600 trtype: tcp 00:26:52.600 adrfam: ipv4 00:26:52.600 subtype: nvme subsystem 00:26:52.600 treq: not specified, sq flow control disable supported 00:26:52.600 portid: 1 00:26:52.600 trsvcid: 4420 00:26:52.600 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:52.600 traddr: 10.0.0.1 00:26:52.600 eflags: none 00:26:52.600 sectype: none 00:26:52.600 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:52.600 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:52.858 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.858 ===================================================== 00:26:52.858 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:52.858 ===================================================== 00:26:52.858 Controller Capabilities/Features 00:26:52.858 ================================ 00:26:52.858 Vendor ID: 0000 00:26:52.858 Subsystem Vendor ID: 0000 00:26:52.859 Serial Number: 4daef9c7dc324d4c79b3 00:26:52.859 Model Number: Linux 00:26:52.859 Firmware Version: 6.7.0-68 00:26:52.859 Recommended Arb Burst: 0 00:26:52.859 IEEE OUI Identifier: 00 00 00 00:26:52.859 Multi-path I/O 00:26:52.859 May have multiple subsystem ports: No 00:26:52.859 May have multiple 
controllers: No 00:26:52.859 Associated with SR-IOV VF: No 00:26:52.859 Max Data Transfer Size: Unlimited 00:26:52.859 Max Number of Namespaces: 0 00:26:52.859 Max Number of I/O Queues: 1024 00:26:52.859 NVMe Specification Version (VS): 1.3 00:26:52.859 NVMe Specification Version (Identify): 1.3 00:26:52.859 Maximum Queue Entries: 1024 00:26:52.859 Contiguous Queues Required: No 00:26:52.859 Arbitration Mechanisms Supported 00:26:52.859 Weighted Round Robin: Not Supported 00:26:52.859 Vendor Specific: Not Supported 00:26:52.859 Reset Timeout: 7500 ms 00:26:52.859 Doorbell Stride: 4 bytes 00:26:52.859 NVM Subsystem Reset: Not Supported 00:26:52.859 Command Sets Supported 00:26:52.859 NVM Command Set: Supported 00:26:52.859 Boot Partition: Not Supported 00:26:52.859 Memory Page Size Minimum: 4096 bytes 00:26:52.859 Memory Page Size Maximum: 4096 bytes 00:26:52.859 Persistent Memory Region: Not Supported 00:26:52.859 Optional Asynchronous Events Supported 00:26:52.859 Namespace Attribute Notices: Not Supported 00:26:52.859 Firmware Activation Notices: Not Supported 00:26:52.859 ANA Change Notices: Not Supported 00:26:52.859 PLE Aggregate Log Change Notices: Not Supported 00:26:52.859 LBA Status Info Alert Notices: Not Supported 00:26:52.859 EGE Aggregate Log Change Notices: Not Supported 00:26:52.859 Normal NVM Subsystem Shutdown event: Not Supported 00:26:52.859 Zone Descriptor Change Notices: Not Supported 00:26:52.859 Discovery Log Change Notices: Supported 00:26:52.859 Controller Attributes 00:26:52.859 128-bit Host Identifier: Not Supported 00:26:52.859 Non-Operational Permissive Mode: Not Supported 00:26:52.859 NVM Sets: Not Supported 00:26:52.859 Read Recovery Levels: Not Supported 00:26:52.859 Endurance Groups: Not Supported 00:26:52.859 Predictable Latency Mode: Not Supported 00:26:52.859 Traffic Based Keep ALive: Not Supported 00:26:52.859 Namespace Granularity: Not Supported 00:26:52.859 SQ Associations: Not Supported 00:26:52.859 UUID List: Not Supported 00:26:52.859 Multi-Domain Subsystem: Not Supported 00:26:52.859 Fixed Capacity Management: Not Supported 00:26:52.859 Variable Capacity Management: Not Supported 00:26:52.859 Delete Endurance Group: Not Supported 00:26:52.859 Delete NVM Set: Not Supported 00:26:52.859 Extended LBA Formats Supported: Not Supported 00:26:52.859 Flexible Data Placement Supported: Not Supported 00:26:52.859 00:26:52.859 Controller Memory Buffer Support 00:26:52.859 ================================ 00:26:52.859 Supported: No 00:26:52.859 00:26:52.859 Persistent Memory Region Support 00:26:52.859 ================================ 00:26:52.859 Supported: No 00:26:52.859 00:26:52.859 Admin Command Set Attributes 00:26:52.859 ============================ 00:26:52.859 Security Send/Receive: Not Supported 00:26:52.859 Format NVM: Not Supported 00:26:52.859 Firmware Activate/Download: Not Supported 00:26:52.859 Namespace Management: Not Supported 00:26:52.859 Device Self-Test: Not Supported 00:26:52.859 Directives: Not Supported 00:26:52.859 NVMe-MI: Not Supported 00:26:52.859 Virtualization Management: Not Supported 00:26:52.859 Doorbell Buffer Config: Not Supported 00:26:52.859 Get LBA Status Capability: Not Supported 00:26:52.859 Command & Feature Lockdown Capability: Not Supported 00:26:52.859 Abort Command Limit: 1 00:26:52.859 Async Event Request Limit: 1 00:26:52.859 Number of Firmware Slots: N/A 00:26:52.859 Firmware Slot 1 Read-Only: N/A 00:26:52.859 Firmware Activation Without Reset: N/A 00:26:52.859 Multiple Update Detection Support: N/A 
00:26:52.859 Firmware Update Granularity: No Information Provided 00:26:52.859 Per-Namespace SMART Log: No 00:26:52.859 Asymmetric Namespace Access Log Page: Not Supported 00:26:52.859 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:52.859 Command Effects Log Page: Not Supported 00:26:52.859 Get Log Page Extended Data: Supported 00:26:52.859 Telemetry Log Pages: Not Supported 00:26:52.859 Persistent Event Log Pages: Not Supported 00:26:52.859 Supported Log Pages Log Page: May Support 00:26:52.859 Commands Supported & Effects Log Page: Not Supported 00:26:52.859 Feature Identifiers & Effects Log Page:May Support 00:26:52.859 NVMe-MI Commands & Effects Log Page: May Support 00:26:52.859 Data Area 4 for Telemetry Log: Not Supported 00:26:52.859 Error Log Page Entries Supported: 1 00:26:52.859 Keep Alive: Not Supported 00:26:52.859 00:26:52.859 NVM Command Set Attributes 00:26:52.859 ========================== 00:26:52.859 Submission Queue Entry Size 00:26:52.859 Max: 1 00:26:52.859 Min: 1 00:26:52.859 Completion Queue Entry Size 00:26:52.859 Max: 1 00:26:52.859 Min: 1 00:26:52.859 Number of Namespaces: 0 00:26:52.859 Compare Command: Not Supported 00:26:52.859 Write Uncorrectable Command: Not Supported 00:26:52.859 Dataset Management Command: Not Supported 00:26:52.859 Write Zeroes Command: Not Supported 00:26:52.859 Set Features Save Field: Not Supported 00:26:52.859 Reservations: Not Supported 00:26:52.859 Timestamp: Not Supported 00:26:52.859 Copy: Not Supported 00:26:52.859 Volatile Write Cache: Not Present 00:26:52.859 Atomic Write Unit (Normal): 1 00:26:52.859 Atomic Write Unit (PFail): 1 00:26:52.859 Atomic Compare & Write Unit: 1 00:26:52.859 Fused Compare & Write: Not Supported 00:26:52.859 Scatter-Gather List 00:26:52.859 SGL Command Set: Supported 00:26:52.859 SGL Keyed: Not Supported 00:26:52.859 SGL Bit Bucket Descriptor: Not Supported 00:26:52.859 SGL Metadata Pointer: Not Supported 00:26:52.859 Oversized SGL: Not Supported 00:26:52.859 SGL Metadata Address: Not Supported 00:26:52.859 SGL Offset: Supported 00:26:52.859 Transport SGL Data Block: Not Supported 00:26:52.859 Replay Protected Memory Block: Not Supported 00:26:52.859 00:26:52.859 Firmware Slot Information 00:26:52.859 ========================= 00:26:52.859 Active slot: 0 00:26:52.859 00:26:52.859 00:26:52.859 Error Log 00:26:52.859 ========= 00:26:52.859 00:26:52.859 Active Namespaces 00:26:52.859 ================= 00:26:52.859 Discovery Log Page 00:26:52.859 ================== 00:26:52.859 Generation Counter: 2 00:26:52.859 Number of Records: 2 00:26:52.859 Record Format: 0 00:26:52.859 00:26:52.859 Discovery Log Entry 0 00:26:52.859 ---------------------- 00:26:52.859 Transport Type: 3 (TCP) 00:26:52.859 Address Family: 1 (IPv4) 00:26:52.859 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:52.859 Entry Flags: 00:26:52.859 Duplicate Returned Information: 0 00:26:52.859 Explicit Persistent Connection Support for Discovery: 0 00:26:52.859 Transport Requirements: 00:26:52.859 Secure Channel: Not Specified 00:26:52.859 Port ID: 1 (0x0001) 00:26:52.859 Controller ID: 65535 (0xffff) 00:26:52.859 Admin Max SQ Size: 32 00:26:52.859 Transport Service Identifier: 4420 00:26:52.859 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:52.859 Transport Address: 10.0.0.1 00:26:52.859 Discovery Log Entry 1 00:26:52.859 ---------------------- 00:26:52.859 Transport Type: 3 (TCP) 00:26:52.859 Address Family: 1 (IPv4) 00:26:52.859 Subsystem Type: 2 (NVM Subsystem) 00:26:52.859 Entry Flags: 
00:26:52.859 Duplicate Returned Information: 0 00:26:52.859 Explicit Persistent Connection Support for Discovery: 0 00:26:52.859 Transport Requirements: 00:26:52.859 Secure Channel: Not Specified 00:26:52.859 Port ID: 1 (0x0001) 00:26:52.859 Controller ID: 65535 (0xffff) 00:26:52.859 Admin Max SQ Size: 32 00:26:52.859 Transport Service Identifier: 4420 00:26:52.859 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:52.859 Transport Address: 10.0.0.1 00:26:52.859 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:52.859 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.859 get_feature(0x01) failed 00:26:52.859 get_feature(0x02) failed 00:26:52.859 get_feature(0x04) failed 00:26:52.859 ===================================================== 00:26:52.859 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:52.859 ===================================================== 00:26:52.859 Controller Capabilities/Features 00:26:52.859 ================================ 00:26:52.859 Vendor ID: 0000 00:26:52.859 Subsystem Vendor ID: 0000 00:26:52.859 Serial Number: b69cfdfdc821b4f6f219 00:26:52.860 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:52.860 Firmware Version: 6.7.0-68 00:26:52.860 Recommended Arb Burst: 6 00:26:52.860 IEEE OUI Identifier: 00 00 00 00:26:52.860 Multi-path I/O 00:26:52.860 May have multiple subsystem ports: Yes 00:26:52.860 May have multiple controllers: Yes 00:26:52.860 Associated with SR-IOV VF: No 00:26:52.860 Max Data Transfer Size: Unlimited 00:26:52.860 Max Number of Namespaces: 1024 00:26:52.860 Max Number of I/O Queues: 128 00:26:52.860 NVMe Specification Version (VS): 1.3 00:26:52.860 NVMe Specification Version (Identify): 1.3 00:26:52.860 Maximum Queue Entries: 1024 00:26:52.860 Contiguous Queues Required: No 00:26:52.860 Arbitration Mechanisms Supported 00:26:52.860 Weighted Round Robin: Not Supported 00:26:52.860 Vendor Specific: Not Supported 00:26:52.860 Reset Timeout: 7500 ms 00:26:52.860 Doorbell Stride: 4 bytes 00:26:52.860 NVM Subsystem Reset: Not Supported 00:26:52.860 Command Sets Supported 00:26:52.860 NVM Command Set: Supported 00:26:52.860 Boot Partition: Not Supported 00:26:52.860 Memory Page Size Minimum: 4096 bytes 00:26:52.860 Memory Page Size Maximum: 4096 bytes 00:26:52.860 Persistent Memory Region: Not Supported 00:26:52.860 Optional Asynchronous Events Supported 00:26:52.860 Namespace Attribute Notices: Supported 00:26:52.860 Firmware Activation Notices: Not Supported 00:26:52.860 ANA Change Notices: Supported 00:26:52.860 PLE Aggregate Log Change Notices: Not Supported 00:26:52.860 LBA Status Info Alert Notices: Not Supported 00:26:52.860 EGE Aggregate Log Change Notices: Not Supported 00:26:52.860 Normal NVM Subsystem Shutdown event: Not Supported 00:26:52.860 Zone Descriptor Change Notices: Not Supported 00:26:52.860 Discovery Log Change Notices: Not Supported 00:26:52.860 Controller Attributes 00:26:52.860 128-bit Host Identifier: Supported 00:26:52.860 Non-Operational Permissive Mode: Not Supported 00:26:52.860 NVM Sets: Not Supported 00:26:52.860 Read Recovery Levels: Not Supported 00:26:52.860 Endurance Groups: Not Supported 00:26:52.860 Predictable Latency Mode: Not Supported 00:26:52.860 Traffic Based Keep ALive: Supported 00:26:52.860 Namespace Granularity: Not Supported 
00:26:52.860 SQ Associations: Not Supported 00:26:52.860 UUID List: Not Supported 00:26:52.860 Multi-Domain Subsystem: Not Supported 00:26:52.860 Fixed Capacity Management: Not Supported 00:26:52.860 Variable Capacity Management: Not Supported 00:26:52.860 Delete Endurance Group: Not Supported 00:26:52.860 Delete NVM Set: Not Supported 00:26:52.860 Extended LBA Formats Supported: Not Supported 00:26:52.860 Flexible Data Placement Supported: Not Supported 00:26:52.860 00:26:52.860 Controller Memory Buffer Support 00:26:52.860 ================================ 00:26:52.860 Supported: No 00:26:52.860 00:26:52.860 Persistent Memory Region Support 00:26:52.860 ================================ 00:26:52.860 Supported: No 00:26:52.860 00:26:52.860 Admin Command Set Attributes 00:26:52.860 ============================ 00:26:52.860 Security Send/Receive: Not Supported 00:26:52.860 Format NVM: Not Supported 00:26:52.860 Firmware Activate/Download: Not Supported 00:26:52.860 Namespace Management: Not Supported 00:26:52.860 Device Self-Test: Not Supported 00:26:52.860 Directives: Not Supported 00:26:52.860 NVMe-MI: Not Supported 00:26:52.860 Virtualization Management: Not Supported 00:26:52.860 Doorbell Buffer Config: Not Supported 00:26:52.860 Get LBA Status Capability: Not Supported 00:26:52.860 Command & Feature Lockdown Capability: Not Supported 00:26:52.860 Abort Command Limit: 4 00:26:52.860 Async Event Request Limit: 4 00:26:52.860 Number of Firmware Slots: N/A 00:26:52.860 Firmware Slot 1 Read-Only: N/A 00:26:52.860 Firmware Activation Without Reset: N/A 00:26:52.860 Multiple Update Detection Support: N/A 00:26:52.860 Firmware Update Granularity: No Information Provided 00:26:52.860 Per-Namespace SMART Log: Yes 00:26:52.860 Asymmetric Namespace Access Log Page: Supported 00:26:52.860 ANA Transition Time : 10 sec 00:26:52.860 00:26:52.860 Asymmetric Namespace Access Capabilities 00:26:52.860 ANA Optimized State : Supported 00:26:52.860 ANA Non-Optimized State : Supported 00:26:52.860 ANA Inaccessible State : Supported 00:26:52.860 ANA Persistent Loss State : Supported 00:26:52.860 ANA Change State : Supported 00:26:52.860 ANAGRPID is not changed : No 00:26:52.860 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:52.860 00:26:52.860 ANA Group Identifier Maximum : 128 00:26:52.860 Number of ANA Group Identifiers : 128 00:26:52.860 Max Number of Allowed Namespaces : 1024 00:26:52.860 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:52.860 Command Effects Log Page: Supported 00:26:52.860 Get Log Page Extended Data: Supported 00:26:52.860 Telemetry Log Pages: Not Supported 00:26:52.860 Persistent Event Log Pages: Not Supported 00:26:52.860 Supported Log Pages Log Page: May Support 00:26:52.860 Commands Supported & Effects Log Page: Not Supported 00:26:52.860 Feature Identifiers & Effects Log Page:May Support 00:26:52.860 NVMe-MI Commands & Effects Log Page: May Support 00:26:52.860 Data Area 4 for Telemetry Log: Not Supported 00:26:52.860 Error Log Page Entries Supported: 128 00:26:52.860 Keep Alive: Supported 00:26:52.860 Keep Alive Granularity: 1000 ms 00:26:52.860 00:26:52.860 NVM Command Set Attributes 00:26:52.860 ========================== 00:26:52.860 Submission Queue Entry Size 00:26:52.860 Max: 64 00:26:52.860 Min: 64 00:26:52.860 Completion Queue Entry Size 00:26:52.860 Max: 16 00:26:52.860 Min: 16 00:26:52.860 Number of Namespaces: 1024 00:26:52.860 Compare Command: Not Supported 00:26:52.860 Write Uncorrectable Command: Not Supported 00:26:52.860 Dataset Management Command: Supported 
00:26:52.860 Write Zeroes Command: Supported 00:26:52.860 Set Features Save Field: Not Supported 00:26:52.860 Reservations: Not Supported 00:26:52.860 Timestamp: Not Supported 00:26:52.860 Copy: Not Supported 00:26:52.860 Volatile Write Cache: Present 00:26:52.860 Atomic Write Unit (Normal): 1 00:26:52.860 Atomic Write Unit (PFail): 1 00:26:52.860 Atomic Compare & Write Unit: 1 00:26:52.860 Fused Compare & Write: Not Supported 00:26:52.860 Scatter-Gather List 00:26:52.860 SGL Command Set: Supported 00:26:52.860 SGL Keyed: Not Supported 00:26:52.860 SGL Bit Bucket Descriptor: Not Supported 00:26:52.860 SGL Metadata Pointer: Not Supported 00:26:52.860 Oversized SGL: Not Supported 00:26:52.860 SGL Metadata Address: Not Supported 00:26:52.860 SGL Offset: Supported 00:26:52.860 Transport SGL Data Block: Not Supported 00:26:52.860 Replay Protected Memory Block: Not Supported 00:26:52.860 00:26:52.860 Firmware Slot Information 00:26:52.860 ========================= 00:26:52.860 Active slot: 0 00:26:52.860 00:26:52.860 Asymmetric Namespace Access 00:26:52.860 =========================== 00:26:52.860 Change Count : 0 00:26:52.860 Number of ANA Group Descriptors : 1 00:26:52.860 ANA Group Descriptor : 0 00:26:52.860 ANA Group ID : 1 00:26:52.860 Number of NSID Values : 1 00:26:52.860 Change Count : 0 00:26:52.860 ANA State : 1 00:26:52.860 Namespace Identifier : 1 00:26:52.860 00:26:52.860 Commands Supported and Effects 00:26:52.860 ============================== 00:26:52.860 Admin Commands 00:26:52.860 -------------- 00:26:52.860 Get Log Page (02h): Supported 00:26:52.860 Identify (06h): Supported 00:26:52.860 Abort (08h): Supported 00:26:52.860 Set Features (09h): Supported 00:26:52.860 Get Features (0Ah): Supported 00:26:52.860 Asynchronous Event Request (0Ch): Supported 00:26:52.860 Keep Alive (18h): Supported 00:26:52.860 I/O Commands 00:26:52.860 ------------ 00:26:52.860 Flush (00h): Supported 00:26:52.860 Write (01h): Supported LBA-Change 00:26:52.860 Read (02h): Supported 00:26:52.860 Write Zeroes (08h): Supported LBA-Change 00:26:52.860 Dataset Management (09h): Supported 00:26:52.860 00:26:52.860 Error Log 00:26:52.860 ========= 00:26:52.860 Entry: 0 00:26:52.860 Error Count: 0x3 00:26:52.860 Submission Queue Id: 0x0 00:26:52.860 Command Id: 0x5 00:26:52.860 Phase Bit: 0 00:26:52.860 Status Code: 0x2 00:26:52.860 Status Code Type: 0x0 00:26:52.860 Do Not Retry: 1 00:26:52.860 Error Location: 0x28 00:26:52.860 LBA: 0x0 00:26:52.860 Namespace: 0x0 00:26:52.860 Vendor Log Page: 0x0 00:26:52.860 ----------- 00:26:52.860 Entry: 1 00:26:52.860 Error Count: 0x2 00:26:52.860 Submission Queue Id: 0x0 00:26:52.860 Command Id: 0x5 00:26:52.860 Phase Bit: 0 00:26:52.860 Status Code: 0x2 00:26:52.860 Status Code Type: 0x0 00:26:52.861 Do Not Retry: 1 00:26:52.861 Error Location: 0x28 00:26:52.861 LBA: 0x0 00:26:52.861 Namespace: 0x0 00:26:52.861 Vendor Log Page: 0x0 00:26:52.861 ----------- 00:26:52.861 Entry: 2 00:26:52.861 Error Count: 0x1 00:26:52.861 Submission Queue Id: 0x0 00:26:52.861 Command Id: 0x4 00:26:52.861 Phase Bit: 0 00:26:52.861 Status Code: 0x2 00:26:52.861 Status Code Type: 0x0 00:26:52.861 Do Not Retry: 1 00:26:52.861 Error Location: 0x28 00:26:52.861 LBA: 0x0 00:26:52.861 Namespace: 0x0 00:26:52.861 Vendor Log Page: 0x0 00:26:52.861 00:26:52.861 Number of Queues 00:26:52.861 ================ 00:26:52.861 Number of I/O Submission Queues: 128 00:26:52.861 Number of I/O Completion Queues: 128 00:26:52.861 00:26:52.861 ZNS Specific Controller Data 00:26:52.861 
============================ 00:26:52.861 Zone Append Size Limit: 0 00:26:52.861 00:26:52.861 00:26:52.861 Active Namespaces 00:26:52.861 ================= 00:26:52.861 get_feature(0x05) failed 00:26:52.861 Namespace ID:1 00:26:52.861 Command Set Identifier: NVM (00h) 00:26:52.861 Deallocate: Supported 00:26:52.861 Deallocated/Unwritten Error: Not Supported 00:26:52.861 Deallocated Read Value: Unknown 00:26:52.861 Deallocate in Write Zeroes: Not Supported 00:26:52.861 Deallocated Guard Field: 0xFFFF 00:26:52.861 Flush: Supported 00:26:52.861 Reservation: Not Supported 00:26:52.861 Namespace Sharing Capabilities: Multiple Controllers 00:26:52.861 Size (in LBAs): 1953525168 (931GiB) 00:26:52.861 Capacity (in LBAs): 1953525168 (931GiB) 00:26:52.861 Utilization (in LBAs): 1953525168 (931GiB) 00:26:52.861 UUID: 2fb419be-d364-42dc-9c0a-1508adb06502 00:26:52.861 Thin Provisioning: Not Supported 00:26:52.861 Per-NS Atomic Units: Yes 00:26:52.861 Atomic Boundary Size (Normal): 0 00:26:52.861 Atomic Boundary Size (PFail): 0 00:26:52.861 Atomic Boundary Offset: 0 00:26:52.861 NGUID/EUI64 Never Reused: No 00:26:52.861 ANA group ID: 1 00:26:52.861 Namespace Write Protected: No 00:26:52.861 Number of LBA Formats: 1 00:26:52.861 Current LBA Format: LBA Format #00 00:26:52.861 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:52.861 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:52.861 rmmod nvme_tcp 00:26:52.861 rmmod nvme_fabrics 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.861 08:38:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.394 08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:55.394 
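The clean_kernel_target trace that follows tears down, in reverse order, the configfs target that nvmf/common.sh assembled earlier (modprobe nvmet, then the mkdir/echo/ln -s sequence traced above). Consolidated, that setup amounts to the sketch below. The commands, paths, and echoed values are taken from the trace; the configfs attribute names receiving the echoes are not visible in the xtrace output and are assumed here to be the standard kernel nvmet ones.

# Sketch of the kernel nvmet target setup traced above (nvmf/common.sh@637-677, as traced).
# Redirect targets are assumptions -- the trace shows only the echo arguments.
modprobe nvmet
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
mkdir /sys/kernel/config/nvmet/ports/1
cd /sys/kernel/config/nvmet
echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model  # assumed attribute
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host                        # assumed attribute
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path        # assumed attribute
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable                        # assumed attribute
echo 10.0.0.1 > ports/1/addr_traddr                                                        # assumed attribute
echo tcp      > ports/1/addr_trtype
echo 4420     > ports/1/addr_trsvcid
echo ipv4     > ports/1/addr_adrfam
# Expose the subsystem on the port; nvme discover then reports the two records logged above.
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/

The teardown traced next undoes this in reverse: echo 0 to disable, rm -f the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.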
08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:55.394 08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:55.394 08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:55.394 08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:55.394 08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:55.394 08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:55.394 08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:55.394 08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:55.394 08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:55.395 08:38:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:57.925 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:57.925 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:58.863 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:58.863 00:26:58.863 real 0m14.749s 00:26:58.863 user 0m3.387s 00:26:58.863 sys 0m7.537s 00:26:58.863 08:38:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:58.863 08:38:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:58.863 ************************************ 00:26:58.863 END TEST nvmf_identify_kernel_target 00:26:58.863 ************************************ 00:26:58.863 08:38:45 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:58.863 08:38:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:58.863 08:38:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:58.863 08:38:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:58.863 ************************************ 00:26:58.863 START TEST nvmf_auth 00:26:58.863 ************************************ 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:58.863 * 
Looking for test storage... 00:26:58.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.863 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@285 -- # xtrace_disable 00:26:58.864 08:38:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # e810=() 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # x722=() 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # mlx=() 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.133 08:38:50 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:04.133 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:04.133 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.133 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:04.134 Found net devices under 0000:86:00.0: cvl_0_0 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:04.134 Found net devices under 0000:86:00.1: cvl_0_1 
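With both cvl_0_* net devices found, nvmf_tcp_init (traced next) isolates the target-side port in a network namespace so that initiator and target can exchange real TCP traffic over the two e810 ports of a single host. Condensed from the trace that follows:

# Sketch of the namespace topology built by nvmf_tcp_init (nvmf/common.sh@229-268, as traced)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator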
00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # is_hw=yes 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.134 08:38:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:04.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:27:04.134 00:27:04.134 --- 10.0.0.2 ping statistics --- 00:27:04.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.134 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:27:04.134 00:27:04.134 --- 10.0.0.1 ping statistics --- 00:27:04.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.134 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@422 -- # return 0 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=423292 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 423292 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 423292 ']' 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
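With nvmf_tgt (pid 423292) launched inside the namespace and waitforlisten polling for its RPC socket, auth.sh generates the DH-HMAC-CHAP secrets. Each gen_key call traced below follows the same pattern; this sketch is reconstructed from the trace, with the redirect into $file an assumption (the xtrace shows only the format_dhchap_key call and the inline python that wraps the hex secret into DHHC-1 form):

# gen_key <digest> <len> -- sketch reconstructed from the host/auth.sh trace below
declare -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
digest=$1 len=$2
# <len> hex characters of randomness, i.e. len/2 bytes (32 -> 16 bytes, 64 -> 32 bytes)
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t "spdk.key-$digest.XXX")
# format_dhchap_key wraps the hex secret as a DHHC-1 key string for the chosen
# digest (0=null .. 3=sha512); writing the result into $file is assumed, not shown in the trace
format_dhchap_key "$key" "${digests[$digest]}" > "$file"
chmod 0600 "$file"
echo "$file"

The resulting files (keys[0..4] plus the ckeys controller counterparts) are then registered with the running target via rpc_cmd keyring_file_add_key, as in the closing trace lines of this section.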
00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:04.134 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.071 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:05.072 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:27:05.072 08:38:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:05.072 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.072 08:38:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=139f03a034dd8712f78b3a1c749b2be7 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.8fm 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 139f03a034dd8712f78b3a1c749b2be7 0 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 139f03a034dd8712f78b3a1c749b2be7 0 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=139f03a034dd8712f78b3a1c749b2be7 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.8fm 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.8fm 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.8fm 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=c141ded9a92f3e90b96a9365ebb442ae451d05aeebb997ee1b4d097297ee981c 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.F2m 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key c141ded9a92f3e90b96a9365ebb442ae451d05aeebb997ee1b4d097297ee981c 3 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 c141ded9a92f3e90b96a9365ebb442ae451d05aeebb997ee1b4d097297ee981c 3 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=c141ded9a92f3e90b96a9365ebb442ae451d05aeebb997ee1b4d097297ee981c 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:27:05.072 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:27:05.331 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.F2m 00:27:05.331 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.F2m 00:27:05.331 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.F2m 00:27:05.331 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:27:05.331 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:27:05.331 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:05.331 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:27:05.331 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:27:05.331 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=118a8e11fd75b4635a2e6250ba743ed51b29ef4b937f9fad 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.Aos 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 118a8e11fd75b4635a2e6250ba743ed51b29ef4b937f9fad 0 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 118a8e11fd75b4635a2e6250ba743ed51b29ef4b937f9fad 0 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=118a8e11fd75b4635a2e6250ba743ed51b29ef4b937f9fad 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.Aos 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.Aos 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.Aos 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=c171b4505ce3d177395a0f6adcc9dd2b2246ed5d3e58fa70 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.ULP 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key c171b4505ce3d177395a0f6adcc9dd2b2246ed5d3e58fa70 2 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 c171b4505ce3d177395a0f6adcc9dd2b2246ed5d3e58fa70 2 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=c171b4505ce3d177395a0f6adcc9dd2b2246ed5d3e58fa70 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.ULP 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.ULP 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.ULP 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=52aa52ee0e8678498769769ad33fce24 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.4pQ 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 52aa52ee0e8678498769769ad33fce24 1 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 52aa52ee0e8678498769769ad33fce24 1 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=52aa52ee0e8678498769769ad33fce24 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.4pQ 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.4pQ 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.4pQ 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=21986c46eae95bddd9be7527d347dcdf 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.1is 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 21986c46eae95bddd9be7527d347dcdf 1 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 21986c46eae95bddd9be7527d347dcdf 1 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=21986c46eae95bddd9be7527d347dcdf 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.1is 00:27:05.332 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.1is 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.1is 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=76bb63df7871cd8ce8421549e18f2a25bafa9acfce7d3387 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.oUw 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 76bb63df7871cd8ce8421549e18f2a25bafa9acfce7d3387 2 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 76bb63df7871cd8ce8421549e18f2a25bafa9acfce7d3387 2 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=76bb63df7871cd8ce8421549e18f2a25bafa9acfce7d3387 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:27:05.591 08:38:52 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.oUw 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.oUw 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.oUw 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=4f4041929c5c758d5c0c4a12122ff446 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.A4L 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 4f4041929c5c758d5c0c4a12122ff446 0 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 4f4041929c5c758d5c0c4a12122ff446 0 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=4f4041929c5c758d5c0c4a12122ff446 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.A4L 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.A4L 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.A4L 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=63b200c7b1f868c742d2bb0ce61eeecf5f2c3c3ed076a2f7e51f666c3109c169 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.eaF 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 63b200c7b1f868c742d2bb0ce61eeecf5f2c3c3ed076a2f7e51f666c3109c169 3 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 63b200c7b1f868c742d2bb0ce61eeecf5f2c3c3ed076a2f7e51f666c3109c169 3 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=63b200c7b1f868c742d2bb0ce61eeecf5f2c3c3ed076a2f7e51f666c3109c169 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.eaF 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.eaF 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.eaF 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 423292 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 423292 ']' 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:05.592 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8fm 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.F2m ]] 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.F2m 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:27:05.851 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Aos 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.ULP ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ULP 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.4pQ 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.1is ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1is 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oUw 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.A4L ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.A4L 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.eaF 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth 
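A note on the key material generated above: gen_key (host/auth.sh@55-64) draws len/2 random bytes as a hex string via xxd, and format_dhchap_key wraps it into the DHHC-1:<hash-id>:<base64>: secret representation through an inline "python -" snippet whose body xtrace does not print. The sketch below reconstructs it under that assumption; decoding the DHHC-1 strings printed later in the log (e.g. ckey3, DHHC-1:00:NGY0..., against key=4f4041929c5c758d5c0c4a12122ff446 above) is consistent with the layout: the ASCII hex string itself is the secret, followed by its little-endian CRC-32, base64-encoded. A python3 -c one-liner stands in for the script's stdin-fed python.

  # digest name -> DHHC-1 hash id, as declared at host/auth.sh@56
  gen_key() {
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      local digest=$1 len=$2 key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
      file=$(mktemp -t "spdk.key-$digest.XXX")
      format_key DHHC-1 "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

  # Reconstructed body (an assumption, see note above): secret + CRC-32, base64.
  format_key() {
      local prefix=$1 key=$2 digest=$3
      python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("%s:%02x:%s:" % (sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()), end="")' "$prefix" "$key" "$digest"
  }

So gen_key sha512 64 yields a /tmp/spdk.key-sha512.* file holding a DHHC-1:03:...: secret, which is exactly what keys[4] above contains; the keyring_file_add_key RPCs then register each file under the key0..key4/ckey0..ckey3 names used for the rest of the run.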
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:05.852 08:38:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:08.378 Waiting for block devices as requested 00:27:08.378 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:27:08.378 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:08.636 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:08.636 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:08.636 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:08.894 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:08.894 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:08.894 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:08.894 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:09.153 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:09.153 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:09.153 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:09.153 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:09.411 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:09.411 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:09.411 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:09.669 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:10.236 08:38:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:10.236 08:38:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:10.236 08:38:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:10.236 08:38:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:27:10.236 08:38:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:10.236 08:38:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:27:10.236 08:38:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:10.236 08:38:57 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:10.236 08:38:57 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:10.236 No valid GPT data, bailing 00:27:10.236 
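The configure_kernel_target steps that follow read best as a unit: setup.sh reset hands the PCI functions back from vfio-pci to their kernel drivers, the freshly rescanned /dev/nvme0n1 passes the not-zoned / not-in-use checks (spdk-gpt.py bails with "No valid GPT data", blkid finds no PTTYPE), and that device becomes the namespace of a kernel nvmet subsystem. Condensed below, with one caveat: xtrace prints echo arguments but not redirection targets, so the configfs attribute files named here are the canonical nvmet ones, inferred rather than read from the log.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=$nvmet/ports/1

  modprobe nvmet
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"

  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"     # opened here, closed again by nvmet_auth_init
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$port/addr_traddr"        # target listens on the initiator-side IP
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"        # expose the subsystem on the port

The nvme discover output further down is the sanity check: the port now lists cnode0 next to the discovery subsystem.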
08:38:57 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:10.236 08:38:57 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:27:10.236 08:38:57 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:27:10.236 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:10.236 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:10.236 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:10.236 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:10.237 00:27:10.237 Discovery Log Number of Records 2, Generation counter 2 00:27:10.237 =====Discovery Log Entry 0====== 00:27:10.237 trtype: tcp 00:27:10.237 adrfam: ipv4 00:27:10.237 subtype: current discovery subsystem 00:27:10.237 treq: not specified, sq flow control disable supported 00:27:10.237 portid: 1 00:27:10.237 trsvcid: 4420 00:27:10.237 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:10.237 traddr: 10.0.0.1 00:27:10.237 eflags: none 00:27:10.237 sectype: none 00:27:10.237 =====Discovery Log Entry 1====== 00:27:10.237 trtype: tcp 00:27:10.237 adrfam: ipv4 00:27:10.237 subtype: nvme subsystem 00:27:10.237 treq: not specified, sq flow control disable supported 00:27:10.237 portid: 1 00:27:10.237 trsvcid: 4420 00:27:10.237 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:10.237 traddr: 10.0.0.1 00:27:10.237 eflags: none 00:27:10.237 sectype: none 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:10.237 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:10.496 08:38:57 
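The mkdir/echo/ln -s trio at host/auth.sh@36-38 plus the echoes at @48-51 above are the target-side half of each round: allow-list the host once, then install the round's hash, DH group and secrets. Spelled out below; as with attr_allow_any_host, the dhchap_* per-host attribute files are hidden by xtrace and therefore inferred (they are the kernel nvmet host attributes), and the key strings are elided here for brevity.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

  mkdir "$host"                              # nvmet_auth_init, once
  echo 0 > "$subsys/attr_allow_any_host"     # authentication implies explicit allow-listing
  ln -s "$host" "$subsys/allowed_hosts/"

  # nvmet_auth_set_key, per round -- here sha256/ffdhe2048/keyid=1:
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo 'ffdhe2048' > "$host/dhchap_dhgroup"
  echo 'DHHC-1:00:MTE4...Skg==:' > "$host/dhchap_key"       # host secret, keys[1]
  echo 'DHHC-1:02:YzE3...m3kw==:' > "$host/dhchap_ctrl_key" # controller secret, ckeys[1]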
nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:10.496 nvme0n1 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:10.496 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 
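That attach/verify/detach was the host-side half of the first, permissive round: auth.sh@106-107 advertise every digest and DH group at once as a smoke test before the per-combination sweep (@113 onwards) pins them one at a time. As plain scripts/rpc.py calls (rpc_cmd is the autotest wrapper around it, talking to the SPDK app that waitforlisten polled on /var/tmp/spdk.sock earlier), one round looks like this:

  rpc=scripts/rpc.py

  # registered once at startup (host/auth.sh@93-95), from the generated files:
  $rpc keyring_file_add_key key1 /tmp/spdk.key-null.Aos
  $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ULP

  # pin the digest(s) and dhgroup(s) the initiator may negotiate:
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # the attach is where DH-HMAC-CHAP actually runs:
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # verify the controller came up, then drop it for the next round:
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc bdev_nvme_detach_controller nvme0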
00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.497 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:10.755 nvme0n1 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:10.755 08:38:57 
nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.755 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.014 nvme0n1 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- 
host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.014 08:38:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.273 nvme0n1 00:27:11.273 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.273 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.273 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:11.273 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.273 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.273 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.273 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.273 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 
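Everything from auth.sh@113 on is the exhaustive sweep; the nvme0n1 / set_key / attach / detach blocks repeat with only the parameters changing. The driver loop, condensed (digest and dhgroup lists as printed at @106-107 above; nvmet_auth_set_key and connect_authenticate are the script's own helpers sketched in the notes above):

  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  keys=(/tmp/spdk.key-null.8fm /tmp/spdk.key-null.Aos /tmp/spdk.key-sha256.4pQ
        /tmp/spdk.key-sha384.oUw /tmp/spdk.key-sha512.eaF)

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
          done
      done
  done

3 digests x 5 DH groups x 5 keys makes 75 authenticated attach/detach cycles, which is why the trace repeats at this length.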
00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.274 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.532 nvme0n1 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth 
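keyid=4 is the one slot generated without a controller key (ckeys[4]= above), so the attach in this round carries --dhchap-key key4 and nothing else. The mechanism is the array expansion at host/auth.sh@71, visible in every round:

  # ${var:+word} expands to nothing when var is empty or unset, so ckey
  # contributes either two extra arguments or none at all:
  ckeys[4]=
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"      # 0 -- no controller-key flags for keyid 4

  ckeys[1]=/tmp/spdk.key-sha384.ULP
  keyid=1
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"       # --dhchap-ctrlr-key ckey1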
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.532 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.533 nvme0n1 00:27:11.533 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.533 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.533 08:38:58 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.533 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.533 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:11.533 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.791 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:12.050 08:38:58 
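The ip_candidates block below (and before every attach above) is get_main_ns_ip resolving which address the initiator should dial. Reconstructed from the expansions visible in the trace, assuming the transport selector is TEST_TRANSPORT (it expands to tcp at nvmf/common.sh@734) and noting the ${!ip} indirection implied by @735 assigning the variable name and @737 testing its value:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}    # the variable *name*...
      [[ -z ${!ip} ]] && return 1             # ...then its value: 10.0.0.1 here,
      echo "${!ip}"                           # since target and initiator share the box
  }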
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.050 nvme0n1 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.050 08:38:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:12.050 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.051 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.309 nvme0n1 00:27:12.309 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.310 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.568 nvme0n1 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:12.568 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.569 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.827 nvme0n1 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:12.827 08:38:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:12.828 08:38:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.828 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.828 08:38:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:13.086 nvme0n1 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- 
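
With the target primed, the host side runs connect_authenticate (host/auth.sh@68-78): it restricts the initiator to a single digest/DH-group pair, attaches with the keyid's secrets, and treats a visible controller as proof that the DH-HMAC-CHAP handshake succeeded. A reconstruction from the trace above; rpc_cmd is SPDK's usual rpc.py wrapper, and the NQNs, address and port are exactly those in the log:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # pass --dhchap-ctrlr-key only when a controller key exists for this keyid
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      # authentication failed if the controller never shows up
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }
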
common/autotest_common.sh@10 -- # set +x 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.086 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.652 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:13.910 nvme0n1 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.910 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.911 08:39:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.169 nvme0n1 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.169 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.427 nvme0n1 00:27:14.427 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.427 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.427 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.427 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:14.427 08:39:01 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.427 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
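
The get_main_ns_ip trace that brackets each attach (nvmf/common.sh@728-742, continuing just below) boils down to a small lookup: choose the environment variable holding the initiator-facing address for the active transport, dereference it, and print the result. A reconstruction; the name of the transport variable is an assumption, since the trace only ever shows its expanded value, tcp:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                  # 'tcp' in this run
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                           # NVMF_INITIATOR_IP=10.0.0.1
      echo "${!ip}"
  }
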
00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.686 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.943 nvme0n1 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:14.943 08:39:01 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:14.943 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.944 08:39:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:15.201 nvme0n1 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.201 08:39:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.572 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:16.830 nvme0n1 00:27:16.830 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.830 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r 
'.[].name' 00:27:16.830 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.830 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.830 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.087 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:17.088 
08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.088 08:39:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.346 nvme0n1 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:17.346 
08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.346 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.603 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:17.603 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:17.603 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:17.603 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:17.603 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.603 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.603 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:17.603 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.604 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:17.604 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:17.604 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:17.604 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.604 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.604 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.861 nvme0n1 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:17.861 08:39:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:18.427 nvme0n1 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
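
Below, the trace finishes the ffdhe6144 pass and the outer loop advances to ffdhe8192 (host/auth.sh@114-116). The structure driving this whole section is two nested loops: every DH group is tried against every keyid before the next group begins, with only the sha256 digest appearing in this excerpt. A sketch of the driver, assuming dhgroups and the keys/ckeys arrays were populated earlier in host/auth.sh:

  for dhgroup in "${dhgroups[@]}"; do    # ... ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
      for keyid in "${!keys[@]}"; do     # 0..4 in this run
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # target side
          connect_authenticate sha256 "$dhgroup" "$keyid"   # host side
      done
  done

Also worth noticing in the timestamps: the pause at the echo of the DH group grows with the group size, from about 0.6 s for ffdhe4096 (00:27:13.086 to 00:27:13.652) to 1.4 s for ffdhe6144 (00:27:15.201 to 00:27:16.572) and roughly 3.3 s for ffdhe8192 below (00:27:18.943 to 00:27:22.224), consistent with the cost of the larger modular exponentiations.
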
00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.427 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:18.685 nvme0n1 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.685 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:18.942 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:18.942 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.942 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:18.942 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.942 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:27:18.942 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:18.942 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:18.943 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.943 08:39:05 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.224 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:22.224 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:22.224 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:22.224 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:27:22.224 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:22.224 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.225 08:39:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:22.225 nvme0n1 00:27:22.225 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.225 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.225 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:22.225 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.225 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:22.225 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.225 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.225 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.225 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.225 08:39:09 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.483 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:23.049 nvme0n1 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.049 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:23.050 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:23.050 08:39:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:23.050 08:39:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.050 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.050 08:39:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:23.620 nvme0n1 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:23.620 
08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.620 08:39:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:24.185 nvme0n1 00:27:24.185 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.185 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.185 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.185 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:24.185 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:24.185 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.185 08:39:11 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.185 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.185 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.185 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:24.443 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.443 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:24.443 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:24.443 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.443 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:24.443 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.443 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:24.443 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:24.443 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:24.443 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:24.444 08:39:11 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.444 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.009 nvme0n1 00:27:25.009 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.009 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:25.010 
08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.010 08:39:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.268 nvme0n1 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.268 nvme0n1 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:25.268 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 nvme0n1 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha384 ffdhe2048 3 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.527 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.786 nvme0n1 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:25.786 08:39:12 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.786 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:26.044 nvme0n1 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
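Between the sha256 and sha384 passes the loop heads at host/auth.sh@113-@115 surface in the trace: the test sweeps every combination of digest, DH group, and key id, repeating the set-key/connect cycle for each. Schematically (a sketch; the digests, dhgroups, keys, and ckeys arrays are populated earlier in the script, and only sha256/sha384 with the ffdhe2048-ffdhe8192 groups appear in this excerpt):

    for digest in "${digests[@]}"; do          # host/auth.sh@113: sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do    # host/auth.sh@114: ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do     # host/auth.sh@115: key ids 0-4
                nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target side  (@116)
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # initiator side (@117)
            done
        done
    done
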
00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:26.044 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:26.045 
08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.045 08:39:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:26.303 nvme0n1 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:26.303 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.304 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 nvme0n1 00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- 
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip:
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O:
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip:
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]]
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O:
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:26.562 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:26.821 nvme0n1
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
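nvmet_auth_set_key is the harness helper that programs the kernel nvmet target with the same secret before each connect; the echo 'hmac(sha384)', echo ffdhe3072, and echo DHHC-1:... lines in its trace are writes into the target's per-host configfs attributes. A rough sketch of the equivalent manual setup, assuming the standard Linux nvmet-auth configfs layout (the paths below are an assumption based on the kernel's nvmet host attributes, not read from this log):

    # Illustrative target-side key setup; the real logic lives in the harness.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed host entry
    echo 'hmac(sha384)' > "$host/dhchap_hash"       # digest for DH-HMAC-CHAP
    echo 'ffdhe3072'    > "$host/dhchap_dhgroup"    # FFDHE group
    echo 'DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip:' > "$host/dhchap_key"
    # controller-side (bidirectional) key, when one exists for this key id:
    echo 'DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O:' > "$host/dhchap_ctrl_key"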
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==:
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId:
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==:
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]]
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId:
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:26.821 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.080 nvme0n1
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=:
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=:
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.080 08:39:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.080 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.339 nvme0n1
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
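Key id 4 is the unidirectional case: its ckey is empty ([[ -z '' ]] above), so the host attaches with only --dhchap-key key4 and the controller never authenticates back. The array assignment at host/auth.sh@71 is what makes the extra flag pair optional; the same bash idiom in isolation:

    # ":+" expands to the alternate words only when ckeys[keyid] is set and
    # non-empty, so the ckey array stays empty for key id 4 and no
    # --dhchap-ctrlr-key argument reaches bdev_nvme_attach_controller.
    declare -a ckeys=([1]='example-ctrlr-secret' [4]='')
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # prints 0 here; with keyid=1 it would print 2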
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}"
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM:
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=:
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM:
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]]
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=:
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.339 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.598 nvme0n1
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
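The for dhgroup / for keyid markers show the overall driver: one fresh target key plus one connect/verify/detach pass per (digest, dhgroup, keyid) combination. The shape of the sweep, reconstructed from this xtrace rather than copied from host/auth.sh:

    # Reconstruction of the loop structure implied by the trace; the helper
    # bodies are the script's own (traced above) and are elided here.
    digest=sha384
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups seen in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                    # key ids 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, check, detach
        done
    done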
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==:
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==:
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==:
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]]
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==:
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.598 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.856 nvme0n1
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip:
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O:
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip:
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]]
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O:
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:27.856 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.114 08:39:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.371 nvme0n1
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==:
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId:
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==:
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]]
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId:
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
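The nvmf/common.sh block that repeats before every attach is get_main_ns_ip resolving which environment variable carries the initiator-side IP for the active transport. A back-reconstruction from the trace (TEST_TRANSPORT is an assumed variable name; the trace only shows its value, tcp):

    # Reconstructed from the xtrace; the canonical version lives in nvmf/common.sh.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                   # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                   # a variable *name*
        [[ -z ${!ip} ]] && return 1                            # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                          # indirect expansion
    }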
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.371 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.629 nvme0n1
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=:
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=:
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.629 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.887 nvme0n1
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
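Outside the harness, the same two RPCs drive this flow directly; rpc_cmd is the wrapper the test framework puts around SPDK's scripts/rpc.py. One standalone pass, assuming a running SPDK application with DH-HMAC-CHAP keys already registered under the names used here (key registration happens before this excerpt):

    # One connect/verify/detach pass via scripts/rpc.py (rpc_cmd equivalent).
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0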
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}"
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM:
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=:
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM:
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]]
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=:
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:28.887 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:28.888 08:39:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:29.454 nvme0n1
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==:
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==:
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==:
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]]
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==:
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:29.454 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:29.712 nvme0n1
00:27:29.712 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:29.712 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.712 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:29.712 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:29.712 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:29.712 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:29.712 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.712 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.712 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:29.712 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip:
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O:
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip:
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]]
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O:
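A recurring oddity in this trace is [[ nvme0 == \n\v\m\e\0 ]]: in bash's [[ ... == ... ]] the right-hand side is a glob pattern unless quoted, so the script compares against a quoted string and xtrace prints that string with every character backslash-escaped to keep it literal. Both forms below are plain literal comparisons:

    # Why xtrace renders the RHS as \n\v\m\e\0: it is the escaped, literal
    # form of "nvme0", not a pattern (and not C-style \n or \v escapes).
    name=nvme0
    [[ $name == "nvme0" ]] && echo quoted-literal-match
    [[ $name == \n\v\m\e\0 ]] && echo escaped-literal-match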
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:29.971 08:39:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:30.230 nvme0n1
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==:
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId:
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==:
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]]
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId:
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:30.230 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:30.797 nvme0n1
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=:
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=:
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
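Secrets with these shapes do not have to be hand-rolled; nvme-cli can emit them. An assumed invocation (command and flag names are from nvme-cli's gen-dhchap-key, not from this log; verify against your installed version):

    # Generate a 48-byte secret transformed with SHA-384, i.e. a DHHC-1:02:
    # key like the ones exercised above.
    nvme gen-dhchap-key --hmac=2 --key-length=48 --nqn nqn.2024-02.io.spdk:cnode0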
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:30.797 08:39:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:31.056 nvme0n1
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}"
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM:
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=:
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192
00:27:31.056 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM:
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]]
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=:
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=()
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]]
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]]
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.315 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:31.883 nvme0n1
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name'
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}"
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==:
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==:
00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth --
nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.883 08:39:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:32.449 nvme0n1 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:27:32.449 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe8192 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:33.017 nvme0n1 00:27:33.017 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.017 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.017 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:33.017 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.017 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:33.017 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.017 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.017 08:39:19 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.017 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.017 08:39:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:33.017 
08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.017 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:33.952 nvme0n1 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.952 08:39:20 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:33.952 08:39:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:33.952 08:39:20 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:33.953 08:39:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.953 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.953 08:39:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 nvme0n1 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:34.519 
08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 nvme0n1 00:27:34.519 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.520 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.520 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:34.520 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.520 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.520 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.520 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.520 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.520 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.520 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.778 nvme0n1 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:34.778 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.779 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.037 nvme0n1 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha512 ffdhe2048 3 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.037 08:39:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.037 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.295 nvme0n1 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:35.295 08:39:22 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:35.295 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.296 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.554 nvme0n1 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
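[editor's note] The host/auth.sh@113-@116 frames interleaved through this output are the loop nest driving all of these passes, and nvmet_auth_set_key is the target-side half of each pass. The xtrace shows only the echoed values ('hmac(sha512)', ffdhe2048, the DHHC-1 key strings), not where they are written; the configfs paths in the sketch below are an assumption based on the kernel nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), so treat it as a plausible reconstruction rather than the script's actual body.

  # Assumed configfs location for the allowed host's DH-HMAC-CHAP attributes.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}

      echo "hmac($digest)" > "$host_dir/dhchap_hash"
      echo "$dhgroup" > "$host_dir/dhchap_dhgroup"
      echo "$key" > "$host_dir/dhchap_key"
      # keyids 0-3 also set a controller key for bidirectional auth; keyid 4 does not.
      [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
  }

  # The driving loops visible at host/auth.sh@113-116:
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done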
00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:35.554 
08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.554 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.812 nvme0n1 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.812 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.072 nvme0n1 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.072 08:39:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.329 nvme0n1 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.329 nvme0n1 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.329 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.588 nvme0n1 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.588 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:36.846 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.847 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.847 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:36.847 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.847 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:36.847 08:39:23 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:36.847 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:36.847 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.847 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.847 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.105 nvme0n1 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.105 08:39:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.363 nvme0n1 00:27:37.363 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.363 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.363 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.364 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.622 nvme0n1 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:37.622 08:39:24 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:37.622 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
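The trace above exercises one connect/authenticate cycle per (digest, dhgroup, keyid) combination, here sha512 with ffdhe4096. A condensed sketch of a single iteration, using only the RPCs visible in the trace (nvmet_auth_set_key is the suite's own helper in host/auth.sh that loads the same secret into the kernel nvmet target; the key names key3/ckey3 refer to keys registered earlier in the run):

  # Target side: install the DH-HMAC-CHAP secret for this keyid.
  nvmet_auth_set_key sha512 ffdhe4096 3
  # Host side: restrict negotiation to the digest and DH group under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # Connect with the host key; the controller key is passed only when a ckey
  # exists for this keyid (keyid 4 has none, so its attach omits the flag).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # Success is asserted by the controller showing up, then the cycle tears down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
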
00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.623 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.882 nvme0n1 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:37.882 08:39:24 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.882 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.141 08:39:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:38.141 nvme0n1 00:27:38.141 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.141 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.141 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:38.141 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.141 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:38.141 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:38.400 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:38.401 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.401 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.401 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:38.660 nvme0n1 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:38.660 
08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.660 08:39:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:39.227 nvme0n1 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:39.227 
08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.227 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:39.486 nvme0n1 00:27:39.486 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.486 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.486 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:39.486 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.486 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:39.486 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:39.744 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:40.004 nvme0n1 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
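For orientation, the loop structure generating this stretch of the trace can be read back from the host/auth.sh@114-117 line tags: an outer walk over DH groups and an inner walk over key ids, with the digest fixed at sha512 in this excerpt (an enclosing loop over digests is implied by the script but not visible here). A sketch, assuming only what those tags show:

  for dhgroup in "${dhgroups[@]}"; do    # this excerpt covers ffdhe3072 through ffdhe8192
    for keyid in "${!keys[@]}"; do       # keyids 0-4; ckeys[4] is empty, so keyid 4 runs without a controller key
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
      connect_authenticate sha512 "$dhgroup" "$keyid"
    done
  done
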
00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.004 08:39:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:40.573 nvme0n1 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTM5ZjAzYTAzNGRkODcxMmY3OGIzYTFjNzQ5YjJiZTeBTxgM: 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: ]] 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YzE0MWRlZDlhOTJmM2U5MGI5NmE5MzY1ZWJiNDQyYWU0NTFkMDVhZWViYjk5N2VlMWI0ZDA5NzI5N2VlOTgxY7G5UcE=: 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.573 08:39:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:41.142 nvme0n1 00:27:41.142 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.142 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.142 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.142 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:41.142 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:41.142 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.142 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.142 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.142 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.142 08:39:28 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.143 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:41.711 nvme0n1 00:27:41.711 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.711 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.711 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.711 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:41.711 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:41.711 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.711 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.711 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.711 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.711 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NTJhYTUyZWUwZTg2Nzg0OTg3Njk3NjlhZDMzZmNlMjQ8nLip: 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: ]] 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjE5ODZjNDZlYWU5NWJkZGQ5YmU3NTI3ZDM0N2RjZGbFVf5O: 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.969 08:39:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:42.536 nvme0n1 00:27:42.536 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.536 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.536 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:42.536 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.536 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:42.536 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.536 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.536 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:42.537 
08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NzZiYjYzZGY3ODcxY2Q4Y2U4NDIxNTQ5ZTE4ZjJhMjViYWZhOWFjZmNlN2QzMzg3JL0xeg==: 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: ]] 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NGY0MDQxOTI5YzVjNzU4ZDVjMGM0YTEyMTIyZmY0NDatapId: 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.537 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.104 nvme0n1 00:27:43.104 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.104 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.104 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.104 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.104 08:39:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:43.104 08:39:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.104 08:39:30 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NjNiMjAwYzdiMWY4NjhjNzQyZDJiYjBjZTYxZWVlY2Y1ZjJjM2MzZWQwNzZhMmY3ZTUxZjY2NmMzMTA5YzE2OXeSIQw=: 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:43.104 08:39:30 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.104 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.669 nvme0n1 00:27:43.669 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.669 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:27:43.669 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.669 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.669 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.669 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.670 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTE4YThlMTFmZDc1YjQ2MzVhMmU2MjUwYmE3NDNlZDUxYjI5ZWY0YjkzN2Y5ZmFkJguSkg==: 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzE3MWI0NTA1Y2UzZDE3NzM5NWEwZjZhZGNjOWRkMmIyMjQ2ZWQ1ZDNlNThmYTcwhQm3kw==: 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:43.928 
08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.928 request: 00:27:43.928 { 00:27:43.928 "name": "nvme0", 00:27:43.928 "trtype": "tcp", 00:27:43.928 "traddr": "10.0.0.1", 00:27:43.928 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:43.928 "adrfam": "ipv4", 00:27:43.928 "trsvcid": "4420", 00:27:43.928 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:43.928 "method": "bdev_nvme_attach_controller", 00:27:43.928 "req_id": 1 00:27:43.928 } 00:27:43.928 Got JSON-RPC error response 00:27:43.928 response: 00:27:43.928 { 00:27:43.928 "code": -32602, 00:27:43.928 "message": "Invalid parameters" 00:27:43.928 } 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.928 08:39:30 
nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # get_main_ns_ip 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.928 request: 00:27:43.928 { 00:27:43.928 "name": "nvme0", 00:27:43.928 "trtype": "tcp", 00:27:43.928 "traddr": "10.0.0.1", 00:27:43.928 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:43.928 "adrfam": "ipv4", 00:27:43.928 "trsvcid": "4420", 00:27:43.928 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:43.928 "dhchap_key": "key2", 00:27:43.928 "method": "bdev_nvme_attach_controller", 00:27:43.928 "req_id": 1 00:27:43.928 } 00:27:43.928 Got JSON-RPC error response 00:27:43.928 response: 00:27:43.928 { 00:27:43.928 "code": -32602, 00:27:43.928 "message": "Invalid parameters" 00:27:43.928 } 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq 
length 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:43.928 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:43.929 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:43.929 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:43.929 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.929 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:27:44.186 request: 00:27:44.186 { 00:27:44.186 "name": "nvme0", 00:27:44.186 "trtype": "tcp", 00:27:44.186 "traddr": "10.0.0.1", 00:27:44.186 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:44.186 "adrfam": "ipv4", 00:27:44.186 "trsvcid": "4420", 00:27:44.186 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:44.186 "dhchap_key": "key1", 00:27:44.186 "dhchap_ctrlr_key": "ckey2", 00:27:44.186 "method": "bdev_nvme_attach_controller", 00:27:44.186 "req_id": 1 00:27:44.186 } 00:27:44.187 Got JSON-RPC error response 00:27:44.187 response: 00:27:44.187 { 00:27:44.187 "code": -32602, 00:27:44.187 "message": "Invalid parameters" 00:27:44.187 } 
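Editor's note (annotation, not part of the captured output): the three NOT-wrapped bdev_nvme_attach_controller calls in this stretch are deliberate negative tests. With the kernel nvmet target keyed for DH-HMAC-CHAP (host/auth.sh@123) and the initiator restricted to sha256/ffdhe2048 (@124), a connect attempt with no key at all (@125), with the wrong key key2 (@130), and with key1 paired with the mismatched controller key ckey2 (@136) must all be rejected; each attempt ends in the -32602 "Invalid parameters" JSON-RPC error dumped above, which the NOT helper then converts into a passing assertion (the [[ 1 == 0 ]] / es=1 evaluation that follows). A condensed, hypothetical stand-alone version of the first check, reusing the address and NQNs from this trace and assuming SPDK's scripts/rpc.py from the same workspace checkout:

    # Target requires DH-HMAC-CHAP: attaching without --dhchap-key must fail.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "FAIL: unauthenticated connect was accepted" >&2
        exit 1
    fi
    # rpc.py exits non-zero when the RPC returns an error, so reaching this
    # line means the attach was rejected as expected.
    echo "PASS: connect without a DH-HMAC-CHAP key was rejected"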
00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.187 08:39:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.187 rmmod nvme_tcp 00:27:44.187 rmmod nvme_fabrics 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 423292 ']' 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 423292 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 423292 ']' 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 423292 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 423292 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 423292' 00:27:44.187 killing process with pid 423292 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 423292 00:27:44.187 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 423292 00:27:44.445 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:44.445 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:44.445 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:44.445 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.445 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:44.445 08:39:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.445 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.445 08:39:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:46.348 08:39:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:48.871 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:48.871 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:48.871 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:48.871 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:48.871 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:48.871 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:48.871 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:48.871 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:48.871 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:48.871 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:49.128 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:49.128 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:49.128 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:49.128 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:49.128 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:49.128 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:50.063 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:50.063 08:39:36 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.8fm /tmp/spdk.key-null.Aos /tmp/spdk.key-sha256.4pQ /tmp/spdk.key-sha384.oUw /tmp/spdk.key-sha512.eaF /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:50.063 08:39:36 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:52.589 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:52.589 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:52.589 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:52.589 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:52.589 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:52.589 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:52.590 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:52.590 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:52.590 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 
00:27:52.590 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:27:52.590 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:27:52.590 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:27:52.590 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:27:52.590 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:27:52.590 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:27:52.590 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:27:52.590 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:27:52.590
00:27:52.590 real 0m53.746s
00:27:52.590 user 0m48.662s
00:27:52.590 sys 0m11.201s
00:27:52.590 08:39:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable
00:27:52.590 08:39:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:27:52.590 ************************************
00:27:52.590 END TEST nvmf_auth
00:27:52.590 ************************************
00:27:52.590 08:39:39 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]]
00:27:52.590 08:39:39 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:27:52.590 08:39:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:27:52.590 08:39:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:27:52.590 08:39:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:52.590 ************************************
00:27:52.590 START TEST nvmf_digest
00:27:52.590 ************************************
00:27:52.590 08:39:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:27:52.848 * Looking for test storage...
00:27:52.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.848 08:39:39 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.849 08:39:39 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.849 08:39:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.221 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:58.222 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:58.222 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:58.222 Found net devices under 0000:86:00.0: cvl_0_0 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:58.222 Found net devices under 0000:86:00.1: cvl_0_1 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:27:58.222 00:27:58.222 --- 10.0.0.2 ping statistics --- 00:27:58.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.222 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:27:58.222 00:27:58.222 --- 10.0.0.1 ping statistics --- 00:27:58.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.222 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:58.222 ************************************ 00:27:58.222 START TEST nvmf_digest_clean 00:27:58.222 ************************************ 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=437790 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 437790 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 437790 ']' 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.222 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:58.223 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:58.223 08:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.223 [2024-05-15 08:39:44.914295] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:27:58.223 [2024-05-15 08:39:44.914334] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.223 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.223 [2024-05-15 08:39:44.971287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.223 [2024-05-15 08:39:45.049610] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.223 [2024-05-15 08:39:45.049642] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.223 [2024-05-15 08:39:45.049649] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.223 [2024-05-15 08:39:45.049655] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.223 [2024-05-15 08:39:45.049660] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
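The nvmf_tcp_init sequence traced above, condensed to its effective commands. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this rig; the point of the sequence is to let one host with two physical E810 ports act as both TCP target and initiator:

    ip netns add cvl_0_0_ns_spdk                        # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

This is also why nvmf_tgt is launched under 'ip netns exec cvl_0_0_ns_spdk' with --wait-for-rpc in the waitforlisten record above: it serves 10.0.0.2:4420 inside the namespace while the bdevperf initiators below stay in the default namespace.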
00:27:58.223 [2024-05-15 08:39:45.049676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.838 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.838 null0 00:27:58.838 [2024-05-15 08:39:45.823928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.838 [2024-05-15 08:39:45.847935] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:58.838 [2024-05-15 08:39:45.848123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.116 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.116 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:59.116 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:59.116 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:59.116 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:59.116 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:59.116 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:59.117 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:59.117 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=438039 00:27:59.117 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 438039 /var/tmp/bperf.sock 00:27:59.117 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 438039 ']' 00:27:59.117 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:59.117 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:59.117 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:27:59.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:59.117 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:59.117 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:59.117 08:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.117 [2024-05-15 08:39:45.897169] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:27:59.117 [2024-05-15 08:39:45.897211] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438039 ] 00:27:59.117 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.117 [2024-05-15 08:39:45.950528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.117 [2024-05-15 08:39:46.022472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.721 08:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:59.721 08:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:59.721 08:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:59.721 08:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:59.721 08:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:00.005 08:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.005 08:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.287 nvme0n1 00:28:00.287 08:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:00.287 08:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:00.287 Running I/O for 2 seconds... 
00:28:02.871 00:28:02.871 Latency(us) 00:28:02.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.871 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:02.871 nvme0n1 : 2.01 25041.20 97.82 0.00 0.00 5105.27 2464.72 15386.71 00:28:02.871 =================================================================================================================== 00:28:02.871 Total : 25041.20 97.82 0.00 0.00 5105.27 2464.72 15386.71 00:28:02.871 0 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:02.871 | select(.opcode=="crc32c") 00:28:02.871 | "\(.module_name) \(.executed)"' 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 438039 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 438039 ']' 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 438039 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 438039 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 438039' 00:28:02.871 killing process with pid 438039 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 438039 00:28:02.871 Received shutdown signal, test time was about 2.000000 seconds 00:28:02.871 00:28:02.871 Latency(us) 00:28:02.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.871 =================================================================================================================== 00:28:02.871 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 438039 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:02.871 08:39:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=438635 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 438635 /var/tmp/bperf.sock 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 438635 ']' 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.871 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:02.872 08:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:02.872 [2024-05-15 08:39:49.787050] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:28:02.872 [2024-05-15 08:39:49.787099] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438635 ] 00:28:02.872 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:02.872 Zero copy mechanism will not be used. 
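The randread 128 KiB / QD 16 pass starting here repeats the RPC choreography of the 4 KiB pass above. Sketched with rpc.py and bdevperf.py standing in for the full /var/jenkins/workspace paths used in the trace:

    # bdevperf was started with --wait-for-rpc, so init is driven over its socket
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest (a CRC32C over each data PDU)
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    bdevperf.py -s /var/tmp/bperf.sock perform_tests    # 2-second timed run
    rpc.py -s /var/tmp/bperf.sock accel_get_stats       # crc32c counters checked afterwards

The --ddgst flag is the thing under test: with it set, every payload the initiator sends or receives is digest-protected, and the accel framework has to compute those CRC32Cs somewhere, which the stats check then verifies.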
00:28:02.872 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.872 [2024-05-15 08:39:49.841335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.131 [2024-05-15 08:39:49.919945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.698 08:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:03.698 08:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:28:03.698 08:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:03.698 08:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:03.698 08:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:03.958 08:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.958 08:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.217 nvme0n1 00:28:04.217 08:39:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:04.217 08:39:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:04.217 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:04.217 Zero copy mechanism will not be used. 00:28:04.217 Running I/O for 2 seconds... 
00:28:06.150 00:28:06.150 Latency(us) 00:28:06.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.150 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:06.150 nvme0n1 : 2.00 5606.53 700.82 0.00 0.00 2850.80 651.80 5185.89 00:28:06.150 =================================================================================================================== 00:28:06.150 Total : 5606.53 700.82 0.00 0.00 2850.80 651.80 5185.89 00:28:06.150 0 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:06.409 | select(.opcode=="crc32c") 00:28:06.409 | "\(.module_name) \(.executed)"' 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 438635 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 438635 ']' 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 438635 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 438635 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 438635' 00:28:06.409 killing process with pid 438635 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 438635 00:28:06.409 Received shutdown signal, test time was about 2.000000 seconds 00:28:06.409 00:28:06.409 Latency(us) 00:28:06.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.409 =================================================================================================================== 00:28:06.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:06.409 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 438635 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:06.668 08:39:53 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=439230 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 439230 /var/tmp/bperf.sock 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 439230 ']' 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.668 08:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:06.668 [2024-05-15 08:39:53.648322] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:28:06.668 [2024-05-15 08:39:53.648368] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439230 ] 00:28:06.668 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.927 [2024-05-15 08:39:53.702130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.927 [2024-05-15 08:39:53.769689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.497 08:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:07.497 08:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:28:07.497 08:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:07.497 08:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:07.497 08:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:07.756 08:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.756 08:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:08.015 nvme0n1 00:28:08.015 08:39:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:08.015 08:39:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:08.273 Running I/O for 2 seconds... 
00:28:10.179 00:28:10.179 Latency(us) 00:28:10.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.179 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:10.179 nvme0n1 : 2.00 28504.16 111.34 0.00 0.00 4486.19 2137.04 14816.83 00:28:10.179 =================================================================================================================== 00:28:10.179 Total : 28504.16 111.34 0.00 0.00 4486.19 2137.04 14816.83 00:28:10.179 0 00:28:10.179 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:10.179 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:10.179 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:10.180 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:10.180 | select(.opcode=="crc32c") 00:28:10.180 | "\(.module_name) \(.executed)"' 00:28:10.180 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 439230 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 439230 ']' 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 439230 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 439230 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 439230' 00:28:10.439 killing process with pid 439230 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 439230 00:28:10.439 Received shutdown signal, test time was about 2.000000 seconds 00:28:10.439 00:28:10.439 Latency(us) 00:28:10.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.439 =================================================================================================================== 00:28:10.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.439 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 439230 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:10.699 08:39:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=439927 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 439927 /var/tmp/bperf.sock 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 439927 ']' 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:10.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:10.699 08:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.699 [2024-05-15 08:39:57.598283] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:28:10.699 [2024-05-15 08:39:57.598328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439927 ] 00:28:10.699 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:10.699 Zero copy mechanism will not be used. 
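This final randwrite 128 KiB / QD 16 pass is judged the same way as the three before it: after perform_tests, digest.sh reads the crc32c accel stats back and insists the digests were really computed. Paraphrased from the host/digest.sh@93-96 records above (get_accel_stats is the helper that pipes accel_get_stats through the jq filter shown in the trace):

    read -r acc_module acc_executed < <(get_accel_stats)
    exp_module=software                   # scan_dsa=false, so no DSA offload expected
    (( acc_executed > 0 ))                # some crc32c operations actually executed
    [[ $acc_module == "$exp_module" ]]    # ...and in the software accel module

If the executed counter were zero, the connection would effectively have run without data digests and the pass would fail even though the I/O itself completed.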
00:28:10.699 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.699 [2024-05-15 08:39:57.651717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.699 [2024-05-15 08:39:57.718815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.636 08:39:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:11.636 08:39:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:28:11.636 08:39:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:11.636 08:39:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:11.636 08:39:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:11.636 08:39:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.636 08:39:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.205 nvme0n1 00:28:12.205 08:39:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:12.205 08:39:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:12.205 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:12.205 Zero copy mechanism will not be used. 00:28:12.205 Running I/O for 2 seconds... 
00:28:14.111 00:28:14.111 Latency(us) 00:28:14.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.111 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:14.111 nvme0n1 : 2.00 6476.85 809.61 0.00 0.00 2466.16 1723.88 9061.06 00:28:14.111 =================================================================================================================== 00:28:14.111 Total : 6476.85 809.61 0.00 0.00 2466.16 1723.88 9061.06 00:28:14.111 0 00:28:14.111 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:14.111 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:14.111 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:14.111 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:14.111 | select(.opcode=="crc32c") 00:28:14.111 | "\(.module_name) \(.executed)"' 00:28:14.111 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 439927 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 439927 ']' 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 439927 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 439927 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 439927' 00:28:14.370 killing process with pid 439927 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 439927 00:28:14.370 Received shutdown signal, test time was about 2.000000 seconds 00:28:14.370 00:28:14.370 Latency(us) 00:28:14.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.370 =================================================================================================================== 00:28:14.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:14.370 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 439927 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 437790 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@946 -- # '[' -z 437790 ']' 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 437790 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 437790 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 437790' 00:28:14.630 killing process with pid 437790 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 437790 00:28:14.630 [2024-05-15 08:40:01.601245] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:14.630 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 437790 00:28:14.888 00:28:14.889 real 0m16.950s 00:28:14.889 user 0m32.424s 00:28:14.889 sys 0m4.444s 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.889 ************************************ 00:28:14.889 END TEST nvmf_digest_clean 00:28:14.889 ************************************ 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.889 ************************************ 00:28:14.889 START TEST nvmf_digest_error 00:28:14.889 ************************************ 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=440654 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 440654 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 440654 ']' 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 
00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.889 08:40:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:15.147 [2024-05-15 08:40:01.931037] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:28:15.147 [2024-05-15 08:40:01.931073] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.147 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.147 [2024-05-15 08:40:01.987911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.147 [2024-05-15 08:40:02.066532] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.147 [2024-05-15 08:40:02.066565] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.147 [2024-05-15 08:40:02.066573] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.147 [2024-05-15 08:40:02.066579] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.147 [2024-05-15 08:40:02.066584] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
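Where nvmf_digest_clean verified that digests are computed, nvmf_digest_error verifies that corrupted digests are detected and survived. Condensed from the RPCs in the records that follow, using the script's own helpers (rpc_cmd talks to the nvmf_tgt socket inside the namespace, bperf_rpc to bdevperf's bperf.sock):

    rpc_cmd accel_assign_opc -o crc32c -m error         # target: route crc32c through the
                                                        # error-injecting accel module
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable         # pass-through at first
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  # then inject corruption
                                                                  # (flags exactly as traced)

With --bdev-retry-count -1 (retry without limit), each corrupted CRC32C surfaces below as a 'data digest error on tqpair' from the initiator's nvme_tcp layer followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion that the bdev layer retries, rather than as a hard I/O failure.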
00:28:15.147 [2024-05-15 08:40:02.066605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.082 [2024-05-15 08:40:02.772663] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.082 null0 00:28:16.082 [2024-05-15 08:40:02.860939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.082 [2024-05-15 08:40:02.884946] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:16.082 [2024-05-15 08:40:02.885146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=440899 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 440899 /var/tmp/bperf.sock 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 440899 ']' 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:16.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:16.082 08:40:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.082 [2024-05-15 08:40:02.919655] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:28:16.082 [2024-05-15 08:40:02.919693] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440899 ] 00:28:16.082 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.082 [2024-05-15 08:40:02.971565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.082 [2024-05-15 08:40:03.049829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.014 08:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:17.014 08:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:28:17.014 08:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.014 08:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.014 08:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:17.014 08:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.014 08:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.014 08:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.014 08:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.014 08:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.272 nvme0n1 00:28:17.272 08:40:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:17.272 08:40:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.272 08:40:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.272 08:40:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.272 08:40:04 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:17.272 08:40:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.530 Running I/O for 2 seconds... 00:28:17.530 [2024-05-15 08:40:04.348419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:17.530 [2024-05-15 08:40:04.348451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.530 [2024-05-15 08:40:04.348461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.530 [2024-05-15 08:40:04.359339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:17.530 [2024-05-15 08:40:04.359362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.530 [2024-05-15 08:40:04.359371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.530 [2024-05-15 08:40:04.367131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:17.530 [2024-05-15 08:40:04.367151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.530 [2024-05-15 08:40:04.367159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.530 [2024-05-15 08:40:04.378445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:17.530 [2024-05-15 08:40:04.378464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.530 [2024-05-15 08:40:04.378473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.530 [2024-05-15 08:40:04.390817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:17.530 [2024-05-15 08:40:04.390837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.530 [2024-05-15 08:40:04.390845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.530 [2024-05-15 08:40:04.399219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:17.530 [2024-05-15 08:40:04.399238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.530 [2024-05-15 08:40:04.399245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.530 [2024-05-15 08:40:04.411135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
00:28:17.530-00:28:18.825 [... several dozen further READ completions elided: each hits the injected crc32c data digest error on tqpair=(0x200d910) and is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the entries differ only in timestamp, cid, and lba ...]
dnr:0 00:28:18.825 [2024-05-15 08:40:05.647517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.647536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.647544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.656929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.656947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.656955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.666738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.666757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.666765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.677473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.677492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.677499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.685811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.685829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.685838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.695172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.695190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.695198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.705213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.705231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.705239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.714617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.714635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.714642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.723178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.723197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.723207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.732732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.732752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.732760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.741986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.742004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.742012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.751442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.751460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.751468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.761560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.761579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.761587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.769930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.769949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.769956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.780149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.780172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.780181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.825 [2024-05-15 08:40:05.789457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.825 [2024-05-15 08:40:05.789476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.825 [2024-05-15 08:40:05.789484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.826 [2024-05-15 08:40:05.799149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.826 [2024-05-15 08:40:05.799172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.826 [2024-05-15 08:40:05.799180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.826 [2024-05-15 08:40:05.808162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.826 [2024-05-15 08:40:05.808188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.826 [2024-05-15 08:40:05.808196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.826 [2024-05-15 08:40:05.817706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.826 [2024-05-15 08:40:05.817725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.826 [2024-05-15 08:40:05.817732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.826 [2024-05-15 08:40:05.825639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.826 [2024-05-15 08:40:05.825658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.826 [2024-05-15 08:40:05.825666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.826 [2024-05-15 08:40:05.836431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.826 [2024-05-15 08:40:05.836450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.826 [2024-05-15 08:40:05.836457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.826 [2024-05-15 08:40:05.847162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:18.826 [2024-05-15 08:40:05.847186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.826 [2024-05-15 08:40:05.847194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.855123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.855141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.084 [2024-05-15 08:40:05.855148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.864808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.864826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.084 [2024-05-15 08:40:05.864834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.875345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.875363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.084 [2024-05-15 08:40:05.875371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.883897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.883915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.084 [2024-05-15 08:40:05.883922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.892830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.892848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.084 [2024-05-15 08:40:05.892856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.903594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.903616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16733 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.084 [2024-05-15 08:40:05.903624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.916479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.916499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.084 [2024-05-15 08:40:05.916507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.927028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.927047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.084 [2024-05-15 08:40:05.927055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.939636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.939655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.084 [2024-05-15 08:40:05.939663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.950695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.950714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.084 [2024-05-15 08:40:05.950722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.084 [2024-05-15 08:40:05.958684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.084 [2024-05-15 08:40:05.958704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:05.958711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:05.968499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:05.968518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:05.968525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:05.978292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:05.978310] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:05.978321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:05.987834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:05.987853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:05.987861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:05.996517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:05.996535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:05.996543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:06.006670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:06.006689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:06.006696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:06.017157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:06.017182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:06.017190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:06.028934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:06.028953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:06.028961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:06.037718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:06.037736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:06.037744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:06.050964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 
08:40:06.050982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:06.050990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:06.059887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:06.059906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:06.059914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:06.070432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:06.070451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:06.070459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:06.080317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:06.080335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:06.080343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:06.088795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:06.088814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:06.088821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.085 [2024-05-15 08:40:06.099345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.085 [2024-05-15 08:40:06.099364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.085 [2024-05-15 08:40:06.099372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.110940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.110958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.110966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.120813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.120832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.120840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.129287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.129306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.129314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.140949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.140969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.140977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.152442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.152463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.152474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.163550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.163569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.163576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.173105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.173124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.173132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.181247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.181265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.181274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.190328] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.190347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.190355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.200334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.200353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.200361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.209893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.209911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.209919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.218251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.218270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.218278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.228599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.228618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.228626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.238151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.238178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.238187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.247212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.247230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.247239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:19.344 [2024-05-15 08:40:06.257196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.257216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.257223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.268294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.268315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.268323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.277435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.277455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.277463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.286010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.286030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.286038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.297025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.297043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.297051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.309582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.309602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.309609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.344 [2024-05-15 08:40:06.319782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910) 00:28:19.344 [2024-05-15 08:40:06.319801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.344 [2024-05-15 08:40:06.319809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.344 [2024-05-15 08:40:06.329055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910)
00:28:19.344 [2024-05-15 08:40:06.329075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.344 [2024-05-15 08:40:06.329083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.344 [2024-05-15 08:40:06.339876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200d910)
00:28:19.344 [2024-05-15 08:40:06.339895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.344 [2024-05-15 08:40:06.339903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.344
00:28:19.344 Latency(us)
00:28:19.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.344 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:19.344 nvme0n1 : 2.00 25611.20 100.04 0.00 0.00 4992.27 2649.93 18236.10
00:28:19.344 ===================================================================================================================
00:28:19.344 Total : 25611.20 100.04 0.00 0.00 4992.27 2649.93 18236.10
00:28:19.344 0
00:28:19.344 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:19.345 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:19.345 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:19.345 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:19.345 | .driver_specific
00:28:19.345 | .nvme_error
00:28:19.345 | .status_code
00:28:19.345 | .command_transient_transport_error'
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 ))
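The pass/fail decision above hinges on get_transient_errcount: digest.sh asks bdevperf for per-bdev iostat over its private RPC socket and filters the NVMe error counters with jq, then asserts that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded (201 in this run). A minimal sketch of that check, assuming the socket path and bdev name from this run; the helper body is inferred from the xtrace above, not quoted from digest.sh:

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat exposes NVMe error counters because the controller
    # was created with --nvme-error-stat (see the setup trace below)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}
errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))   # the test fails unless injected digest errors were observed

As a sanity check on the table above, 25611.20 IOPS at the 4096-byte IO size works out to 25611.20 × 4096 / 1048576 ≈ 100.04 MiB/s, matching the reported throughput.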
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 440899
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 440899 ']'
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 440899
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 440899
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 440899'
killing process with pid 440899
00:28:19.602 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 440899
00:28:19.603 Received shutdown signal, test time was about 2.000000 seconds
00:28:19.603
00:28:19.603 Latency(us)
00:28:19.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.603 ===================================================================================================================
00:28:19.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:19.603 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 440899
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=441499
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 441499 /var/tmp/bperf.sock
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 441499 ']'
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:28:19.861 08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
08:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.861 [2024-05-15 08:40:06.820401] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:28:19.861 [2024-05-15 08:40:06.820448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441499 ]
00:28:19.861 I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
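run_bperf_err then repeats the experiment at a 128 KiB IO size with queue depth 16. The bdevperf process is launched with -z, so it comes up idle and waits for RPC-driven configuration on its own UNIX socket, and waitforlisten blocks until that socket accepts connections. A rough sketch of this launch step, using the paths and arguments from the trace above; the surrounding helper logic is simplified:

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# -z: start idle and wait for RPC configuration; -r: private RPC socket;
# -m 2: run on core 1; -w/-o/-q/-t: workload, IO size, queue depth, runtime
"$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# autotest_common.sh helper: polls until the pid is alive and the socket listens
waitforlisten "$bperfpid" /var/tmp/bperf.sock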
00:28:19.861 EAL: No free 2048 kB hugepages reported on node 1
00:28:19.861 [2024-05-15 08:40:06.873064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:20.119 [2024-05-15 08:40:06.952547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:20.684 08:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:28:20.684 08:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:28:20.684 08:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:20.684 08:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:20.942 08:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:20.942 08:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:20.942 08:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:20.942 08:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:20.942 08:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:20.942 08:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:21.200 nvme0n1
00:28:21.200 08:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:21.200 08:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:21.200 08:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:21.200 08:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:21.200 08:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:21.200 08:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:21.200 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:21.200 Zero copy mechanism will not be used.
00:28:21.200 Running I/O for 2 seconds...
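The sequence traced above is the core of the digest-error test: enable NVMe error statistics, attach the controller with the TCP data digest enabled (--ddgst), arm the accel crc32c error injector, and only then drive IO. Condensed into plain RPC calls under the assumption, taken from the trace, that bperf_rpc targets bdevperf's socket while rpc_cmd targets the nvmf target application:

bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count errors, retry forever
rpc_cmd accel_error_inject_error -o crc32c -t disable                    # attach with injection off
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                       # digest on the wire
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32              # corrupt 32 crc32c operations
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests                                 # run the 2-second workload

Because --bdev-retry-count -1 makes the bdev layer retry indefinitely, each injected digest failure surfaces as a TRANSIENT TRANSPORT ERROR completion rather than a failed job, which is plausibly why the earlier run finished with Fail/s 0.00 while still accumulating 201 transient errors.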
00:28:21.459 [2024-05-15 08:40:08.229240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:21.459 [2024-05-15 08:40:08.229272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.459 [2024-05-15 08:40:08.229282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.459 [2024-05-15 08:40:08.235870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:21.459 [2024-05-15 08:40:08.235894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.459 [2024-05-15 08:40:08.235903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.459 [2024-05-15 08:40:08.243652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:21.459 [2024-05-15 08:40:08.243677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.459 [2024-05-15 08:40:08.243686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.459 [2024-05-15 08:40:08.252000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:21.459 [2024-05-15 08:40:08.252023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.459 [2024-05-15 08:40:08.252032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.459 [2024-05-15 08:40:08.260081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:21.459 [2024-05-15 08:40:08.260107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.459 [2024-05-15 08:40:08.260116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.459 [2024-05-15 08:40:08.268315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:21.459 [2024-05-15 08:40:08.268337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.459 [2024-05-15 08:40:08.268345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.459 [2024-05-15 08:40:08.276478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:21.459 [2024-05-15 08:40:08.276500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.459 [2024-05-15 08:40:08.276509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:21.459 [2024-05-15 08:40:08.284667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200)
00:28:21.459 [2024-05-15 08:40:08.284694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.459 [2024-05-15 08:40:08.284702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1450 data digest error on tqpair=(0x1772200), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for over a hundred further qid:1 READ commands between 08:40:08.292 and 08:40:09.228, with only the timestamps and the cid, lba, and sqhd fields varying ...]
00:28:22.244 [2024-05-15 08:40:09.235508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200)
00:28:22.244 [2024-05-15 08:40:09.235530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.244
[2024-05-15 08:40:09.235538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.244 [2024-05-15 08:40:09.241625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.244 [2024-05-15 08:40:09.241646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.244 [2024-05-15 08:40:09.241654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.244 [2024-05-15 08:40:09.247945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.244 [2024-05-15 08:40:09.247966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.244 [2024-05-15 08:40:09.247974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.244 [2024-05-15 08:40:09.253198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.244 [2024-05-15 08:40:09.253219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.244 [2024-05-15 08:40:09.253227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.244 [2024-05-15 08:40:09.258936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.244 [2024-05-15 08:40:09.258956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.244 [2024-05-15 08:40:09.258964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.244 [2024-05-15 08:40:09.264526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.264546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.264554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.270093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.270114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.270122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.275670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.275690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.275697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.281451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.281472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.281480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.287218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.287238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.287245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.292761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.292781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.292789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.298404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.298423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.298434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.304302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.304324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.304332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.310546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.310566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.310573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.316296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.316317] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.316324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.323058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.323079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.323086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.329502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.329525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.329533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.335574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.335596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.335604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.342049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.503 [2024-05-15 08:40:09.342071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.503 [2024-05-15 08:40:09.342079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.503 [2024-05-15 08:40:09.349084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.349104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.349112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.356555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.356577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.356585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.365062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.365082] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.365091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.372799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.372821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.372828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.380230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.380250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.380258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.387720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.387741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.387749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.395226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.395248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.395256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.402621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.402644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.402652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.410613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.410634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.410641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.418382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.418404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.418418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.426548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.426570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.426578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.434819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.434841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.434850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.443721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.443744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.443753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.451633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.451655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.451664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.459321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.459342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.459351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.463871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.463891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.463899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.472079] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.472100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.472109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.479340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.479362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.479370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.486498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.486523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.486531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.494574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.494595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.494603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.501945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.501966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.501974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.509929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.509951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.504 [2024-05-15 08:40:09.509959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.504 [2024-05-15 08:40:09.518375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.504 [2024-05-15 08:40:09.518396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.505 [2024-05-15 08:40:09.518404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:22.765 [2024-05-15 08:40:09.526445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.526465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.526474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.533495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.533516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.533524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.540993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.541014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.541022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.548650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.548671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.548679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.556591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.556612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.556620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.564748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.564769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.564777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.572989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.573010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.573018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.581186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.581207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.581215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.588594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.588615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.588623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.596700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.596722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.596730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.604479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.604501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.604508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.611705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.611727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.611735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.618778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.618798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.618810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.625354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.625374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.625383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.632047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.632069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.632077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.638445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.638465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.638473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.644729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.644749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.644757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.650796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.650816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.650824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.656886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.656907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.656914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.662556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.662576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.662583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.668288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.668309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.668316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.674110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.674135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.674143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.679775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.679796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.679803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.765 [2024-05-15 08:40:09.685392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.765 [2024-05-15 08:40:09.685413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.765 [2024-05-15 08:40:09.685420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.691114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.691135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.691142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.696947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.696968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.696976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.702297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.702317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.702325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.707862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.707882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 
[2024-05-15 08:40:09.707890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.713289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.713310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.713318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.718863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.718884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.718891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.724609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.724630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.724637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.730503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.730524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.730532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.736000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.736020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.736028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.741522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.741542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.741550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.747295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.747316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10336 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.747323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.753080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.753101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.753109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.758545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.758566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.758574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.764329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.764350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.764358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.769618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.769638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.769651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.773107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.773127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.773134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.778690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.778710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.778718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.766 [2024-05-15 08:40:09.784927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:22.766 [2024-05-15 08:40:09.784946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.766 [2024-05-15 08:40:09.784954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.026 [2024-05-15 08:40:09.789823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.026 [2024-05-15 08:40:09.789842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.026 [2024-05-15 08:40:09.789850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.026 [2024-05-15 08:40:09.795275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.026 [2024-05-15 08:40:09.795295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.026 [2024-05-15 08:40:09.795303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.800831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.800851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.800859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.806247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.806267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.806274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.811588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.811608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.811616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.817173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.817196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.817204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.822728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.822747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.822756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.828648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.828666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.828674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.833972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.833992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.833999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.839214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.839235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.839243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.844953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.844972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.844980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.850520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.850540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.850547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.856584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 [2024-05-15 08:40:09.856604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.027 [2024-05-15 08:40:09.856612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.027 [2024-05-15 08:40:09.863203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1772200) 00:28:23.027 
[2024-05-15 08:40:09.863222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.027 [2024-05-15 08:40:09.863230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... dozens of similar three-record groups omitted (08:40:09.869 through 08:40:10.226): each "data digest error on tqpair=(0x1772200)" from nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done is followed by the failed READ command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, differing only in timestamp, cid, and lba ...]
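Each failure in the run above is a three-record group: nvme_tcp.c:1450 reports the CRC32C data digest mismatch on the queue pair, nvme_qpair.c prints the READ command the data belonged to, and the completion is logged as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 (generic) with status code 0x22, which the host treats as retriable rather than fatal. One way to tally the groups from a saved copy of this console output (a sketch; build.log is a hypothetical file name):

  # one digest-error record is printed per injected failure
  grep -c 'data digest error on tqpair' build.log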
00:28:23.289
00:28:23.289 Latency(us)
00:28:23.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:23.289 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:23.289 nvme0n1 : 2.00 4755.43 594.43 0.00 0.00 3361.01 541.38 10029.86
00:28:23.289 ===================================================================================================================
00:28:23.289 Total : 4755.43 594.43 0.00 0.00 3361.01 541.38 10029.86
00:28:23.289 0
00:28:23.289 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:23.289 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:23.289 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:23.289 | .driver_specific
00:28:23.289 | .nvme_error
00:28:23.289 | .status_code
00:28:23.289 | .command_transient_transport_error'
00:28:23.289 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 307 > 0 ))
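The (( 307 > 0 )) evaluation above is the pass criterion for the randread pass: get_transient_errcount reads the per-bdev counter of TRANSIENT TRANSPORT ERROR completions (collected because bdev_nvme_set_options is given --nvme-error-stat) out of the bdev_get_iostat JSON, and the test only requires it to be non-zero. A standalone sketch of the same query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and the SPDK tree is at the path used in this run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # same jq filter as host/digest.sh@28, written as a single path
  errcount=$("$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "transient transport errors counted: $errcount"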
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 441499
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 441499 ']'
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 441499
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 441499
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 441499'
00:28:23.548 killing process with pid 441499
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 441499
00:28:23.548 Received shutdown signal, test time was about 2.000000 seconds
00:28:23.548
00:28:23.548 Latency(us)
00:28:23.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:23.548 ===================================================================================================================
00:28:23.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:23.548 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 441499
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=442082
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 442082 /var/tmp/bperf.sock
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 442082 ']'
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:23.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:28:23.807 08:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:23.807 [2024-05-15 08:40:10.705235] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
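run_bperf_err now repeats the measurement for a randwrite workload (4096-byte I/O, queue depth 128). A fresh bdevperf is started in the background with -z, which keeps it idle until a perform_tests RPC arrives, and the script blocks on the RPC socket before sending any configuration. A condensed sketch of that launch sequence using the flags traced above (waitforlisten is the polling helper from autotest_common.sh):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # poll until the UNIX-domain RPC socket accepts connections
  waitforlisten "$bperfpid" /var/tmp/bperf.sock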
00:28:23.807 [2024-05-15 08:40:10.705280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442082 ]
00:28:23.807 EAL: No free 2048 kB hugepages reported on node 1
00:28:23.898 [2024-05-15 08:40:10.759328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:24.065 [2024-05-15 08:40:10.838339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:24.630 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:28:24.630 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:28:24.630 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:24.630 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:24.888 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:24.888 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:24.888 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:24.888 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:24.888 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:24.888 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:25.147 nvme0n1
00:28:25.147 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:25.147 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:25.147 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:25.147 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:25.147 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:25.147 08:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:25.147 Running I/O for 2 seconds...
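The RPC exchange above is the heart of the write-path digest test: NVMe error counters and unlimited bdev retries are enabled, CRC32C error injection is disabled while the controller attaches so the connect itself stays clean, the controller is attached with --ddgst to turn on NVMe/TCP data digests, injection is then re-armed to corrupt the CRC32C calculation at an interval of 256 operations, and perform_tests releases the queued randwrite workload. The same sequence as plain commands, as a sketch against the sockets used in this run (rpc_cmd in the script is a wrapper around scripts/rpc.py aimed at the default application socket):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # every 256th crc32c op
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests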
00:28:25.147 [2024-05-15 08:40:12.090284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ed920
00:28:25.147 [2024-05-15 08:40:12.091128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:25.147 [2024-05-15 08:40:12.091156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... dozens of similar three-record groups omitted (08:40:12.100 through 08:40:12.652): each "Data digest error on tqpair=(0x8dbcd0)" from tcp.c:2058:data_crc32_calc_done, with a varying pdu address, is followed by the failed WRITE command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, differing only in timestamp, pdu, cid, and lba ...]
00:28:25.666 [2024-05-15 08:40:12.660323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190df550
00:28:25.666 [2024-05-15 08:40:12.661882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17752 len:1 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:28:25.666 [2024-05-15 08:40:12.661899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:25.666 [2024-05-15 08:40:12.666781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190eff18 00:28:25.666 [2024-05-15 08:40:12.667426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.666 [2024-05-15 08:40:12.667444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:25.666 [2024-05-15 08:40:12.676007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ea248 00:28:25.666 [2024-05-15 08:40:12.676626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.666 [2024-05-15 08:40:12.676644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:25.666 [2024-05-15 08:40:12.686333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190fd640 00:28:25.666 [2024-05-15 08:40:12.687497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.666 [2024-05-15 08:40:12.687515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:25.925 [2024-05-15 08:40:12.696074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f20d8 00:28:25.925 [2024-05-15 08:40:12.697380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.925 [2024-05-15 08:40:12.697398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:25.925 [2024-05-15 08:40:12.704548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190e49b0 00:28:25.925 [2024-05-15 08:40:12.705385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.925 [2024-05-15 08:40:12.705403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:25.925 [2024-05-15 08:40:12.712907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ebb98 00:28:25.925 [2024-05-15 08:40:12.713832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.713849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.723232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190fd208 00:28:25.926 [2024-05-15 08:40:12.724183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5779 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.724217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.731653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190fef90 00:28:25.926 [2024-05-15 08:40:12.732992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.733010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.740108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190fa3a0 00:28:25.926 [2024-05-15 08:40:12.740701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.740719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.748673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f4f40 00:28:25.926 [2024-05-15 08:40:12.749334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.749352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.758760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f6890 00:28:25.926 [2024-05-15 08:40:12.759438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.759456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.768042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f92c0 00:28:25.926 [2024-05-15 08:40:12.768734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.768752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.776398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f1868 00:28:25.926 [2024-05-15 08:40:12.777173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.777191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.786522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f2d80 00:28:25.926 [2024-05-15 08:40:12.787305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5541 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.787323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.795000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190e5a90 00:28:25.926 [2024-05-15 08:40:12.795893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.795911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.804579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f57b0 00:28:25.926 [2024-05-15 08:40:12.805613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.805630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.814136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190e84c0 00:28:25.926 [2024-05-15 08:40:12.815280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.815297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.822655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190e9e10 00:28:25.926 [2024-05-15 08:40:12.823305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.823323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.831889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190e8d30 00:28:25.926 [2024-05-15 08:40:12.832443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.832464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.841454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f7da8 00:28:25.926 [2024-05-15 08:40:12.842102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.842120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.850716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ebb98 00:28:25.926 [2024-05-15 08:40:12.851608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 
nsid:1 lba:11159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.851626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.859154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f6cc8 00:28:25.926 [2024-05-15 08:40:12.860176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.860193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.869569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190de470 00:28:25.926 [2024-05-15 08:40:12.870633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.870652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.878256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190de470 00:28:25.926 [2024-05-15 08:40:12.879378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.879396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.887830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190edd58 00:28:25.926 [2024-05-15 08:40:12.889051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.889068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.897420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ebb98 00:28:25.926 [2024-05-15 08:40:12.898762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.898780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.906912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190dfdc0 00:28:25.926 [2024-05-15 08:40:12.908434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.908451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.915420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ea248 00:28:25.926 [2024-05-15 08:40:12.916492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:11299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.916510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.923735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190e6300 00:28:25.926 [2024-05-15 08:40:12.925147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.925169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.931522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f1ca0 00:28:25.926 [2024-05-15 08:40:12.932283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.932301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:25.926 [2024-05-15 08:40:12.941801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190e49b0 00:28:25.926 [2024-05-15 08:40:12.942587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.926 [2024-05-15 08:40:12.942605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:26.185 [2024-05-15 08:40:12.951343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ed0b0 00:28:26.185 [2024-05-15 08:40:12.951992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.185 [2024-05-15 08:40:12.952010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:26.185 [2024-05-15 08:40:12.961019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ef6a8 00:28:26.185 [2024-05-15 08:40:12.961780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.185 [2024-05-15 08:40:12.961797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.185 [2024-05-15 08:40:12.969576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190fda78 00:28:26.185 [2024-05-15 08:40:12.970930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.185 [2024-05-15 08:40:12.970947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.185 [2024-05-15 08:40:12.978020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f1ca0 00:28:26.185 [2024-05-15 08:40:12.978686] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.185 [2024-05-15 08:40:12.978704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.185 [2024-05-15 08:40:12.987391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190eff18 [2024-05-15 08:40:12.987923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.185 [2024-05-15 08:40:12.987940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:26.185 [2024-05-15 08:40:12.998052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190e8088 00:28:26.185 [2024-05-15 08:40:12.999395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.185 [2024-05-15 08:40:12.999414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:26.185 [2024-05-15 08:40:13.006591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190f2510 [2024-05-15 08:40:13.007514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.185 [2024-05-15 08:40:13.007532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:26.185
[2024-05-15 08:40:13.015581 through 08:40:13.766889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8; this same digest-error entry repeats for several dozen further single-block (len:0x1000) WRITE commands (cids 47, 104 and 105, various LBAs), each completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 sqhd:007f p:0 m:0 dnr:0
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.964 [2024-05-15 08:40:13.776178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.964 [2024-05-15 08:40:13.776344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.964 [2024-05-15 08:40:13.776361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.964 [2024-05-15 08:40:13.785697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.785856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.785872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.795085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.795237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.795254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.804624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.804770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.804786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.814048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.814218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.814235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.823580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.823728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.823744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.833027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.833196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.833213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.842509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.842657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.842674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.851952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.852097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.852113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.861386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.861532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.861550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.870846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.870991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.871007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.880393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.880558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.880574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.890101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.890277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.890300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.899775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.899940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 
08:40:13.899960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.909407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.909558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.909575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.918892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.919054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.919071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.928385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.928531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.928547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.937867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.938013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.938029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.947324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.947491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.947508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.956816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.956963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.956979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.966341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.966489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:26.965 [2024-05-15 08:40:13.966506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.975689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.975837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.965 [2024-05-15 08:40:13.975853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.965 [2024-05-15 08:40:13.985306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:26.965 [2024-05-15 08:40:13.985459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.966 [2024-05-15 08:40:13.985475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.224 [2024-05-15 08:40:13.994970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:27.224 [2024-05-15 08:40:13.995116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.224 [2024-05-15 08:40:13.995133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.224 [2024-05-15 08:40:14.004637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:27.224 [2024-05-15 08:40:14.004803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.224 [2024-05-15 08:40:14.004820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.224 [2024-05-15 08:40:14.014121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:27.224 [2024-05-15 08:40:14.014297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.224 [2024-05-15 08:40:14.014313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.224 [2024-05-15 08:40:14.023564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:27.224 [2024-05-15 08:40:14.023729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.224 [2024-05-15 08:40:14.023746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.224 [2024-05-15 08:40:14.033080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:27.224 [2024-05-15 08:40:14.033255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15188 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:27.224 [2024-05-15 08:40:14.033272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.224 [2024-05-15 08:40:14.042584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:27.224 [2024-05-15 08:40:14.042752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.224 [2024-05-15 08:40:14.042769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.224 [2024-05-15 08:40:14.052060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:27.224 [2024-05-15 08:40:14.052234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.224 [2024-05-15 08:40:14.052251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.224 [2024-05-15 08:40:14.061583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:27.224 [2024-05-15 08:40:14.061729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.224 [2024-05-15 08:40:14.061745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.225 [2024-05-15 08:40:14.071008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:27.225 [2024-05-15 08:40:14.071177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.225 [2024-05-15 08:40:14.071194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.225 [2024-05-15 08:40:14.080450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbcd0) with pdu=0x2000190ff3c8 00:28:27.225 [2024-05-15 08:40:14.080593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.225 [2024-05-15 08:40:14.080609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.225 00:28:27.225 Latency(us) 00:28:27.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.225 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:27.225 nvme0n1 : 2.00 27232.66 106.38 0.00 0.00 4691.27 2393.49 15386.71 00:28:27.225 =================================================================================================================== 00:28:27.225 Total : 27232.66 106.38 0.00 0.00 4691.27 2393.49 15386.71 00:28:27.225 0 00:28:27.225 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:27.225 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:27.225 08:40:14 
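As a quick sanity check on the summary table above: at the 4096-byte I/O size the MiB/s column follows directly from the IOPS column, and Fail/s stays at 0.00 even though digest errors were injected, which is consistent with the --bdev-retry-count -1 option this test uses (each COMMAND TRANSIENT TRANSPORT ERROR is retried rather than failed). The one-liner below is an illustrative check, not part of the test output:

  awk 'BEGIN { printf "%.2f MiB/s\n", 27232.66 * 4096 / 1048576 }'   # prints 106.38, matching the MiB/s column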
00:28:27.225 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:27.225 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:27.225 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:27.225 | .driver_specific
00:28:27.225 | .nvme_error
00:28:27.225 | .status_code
00:28:27.225 | .command_transient_transport_error'
00:28:27.225 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 ))
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 442082
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 442082 ']'
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 442082
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 442082
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 442082'
00:28:27.483 killing process with pid 442082
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 442082
00:28:27.483 Received shutdown signal, test time was about 2.000000 seconds
00:28:27.483
00:28:27.483 Latency(us)
00:28:27.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:27.483 ===================================================================================================================
00:28:27.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:27.483 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 442082
00:28:27.741 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:27.741 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=442779
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 442779 /var/tmp/bperf.sock
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 442779 ']'
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
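The get_transient_errcount / bdev_get_iostat / jq trace above shows how the harness obtains the count it asserts on with (( 214 > 0 )): bdevperf is queried over its private RPC socket, and the per-bdev NVMe error statistics (enabled earlier via --nvme-error-stat) are filtered down to the command_transient_transport_error counter. A minimal stand-alone sketch of that helper, assuming an SPDK checkout at $SPDK_DIR and a bdevperf instance already listening on /var/tmp/bperf.sock (the RPC name and jq path are taken verbatim from the trace; SPDK_DIR is an assumption):

  # Sketch reconstructed from the xtrace output above.
  get_transient_errcount() {
      local bdev=$1
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }

  # The digest-error test passes only if at least one transient transport
  # error was recorded during the run:
  (( $(get_transient_errcount nvme0n1) > 0 ))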
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:27.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:28:27.742 08:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:27.742 [2024-05-15 08:40:14.580312] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:28:27.742 [2024-05-15 08:40:14.580361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442779 ]
00:28:27.742 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:27.742 Zero copy mechanism will not be used.
00:28:27.742 EAL: No free 2048 kB hugepages reported on node 1
00:28:27.742 [2024-05-15 08:40:14.635201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:27.742 [2024-05-15 08:40:14.702540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:28.675 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:28:28.675 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:28:28.675 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:28.675 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:28.675 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:28.675 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:28.675 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:28.675 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:28.675 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:28.675 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:28.933 nvme0n1
00:28:29.192 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:29.192 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:29.192 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.192 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:29.192 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
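Condensing the xtrace above, the second error-injection pass (128 KiB random writes, queue depth 16) sets up roughly as follows. This is a sketch reconstructed from the traced commands: the rpc() helper below is shorthand for the bperf_rpc/rpc_cmd wrappers seen in the trace, and SPDK_DIR stands in for the workspace path.

  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

  # Start bdevperf on core 1 (-m 2) with a private RPC socket; -z makes it
  # idle until a perform_tests RPC is sent.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &

  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors, retry indefinitely
  rpc accel_error_inject_error -o crc32c -t disable                   # clean digests while connecting
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # --ddgst enables the TCP data digest
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # start corrupting crc32c results (-i 32, as traced)

Once perform_tests is issued (the next trace line), each corrupted digest shows up as a data_crc32_calc_done error plus a retried COMMAND TRANSIENT TRANSPORT ERROR, which is what the repeated log block below records.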
00:28:29.192 08:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:29.192 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:29.193 Zero copy mechanism will not be used.
00:28:29.193 Running I/O for 2 seconds...
00:28:29.193 [2024-05-15 08:40:16.059439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
[... repeated log triplets omitted: for this 131072-byte randwrite run the same pattern recurs every few milliseconds on cid:15, here a data_crc32_calc_done *ERROR* on tqpair=(0x8dbe00), a WRITE command (len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) printed by nvme_io_qpair_print_command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion printed by spdk_nvme_print_completion, with sqhd cycling through 0001/0021/0041/0061; the capture cuts off mid-entry at 08:40:16.346 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.454 [2024-05-15 08:40:16.346331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.454 [2024-05-15 08:40:16.350014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.454 [2024-05-15 08:40:16.350287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.454 [2024-05-15 08:40:16.350305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.454 [2024-05-15 08:40:16.353947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.354204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.354223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.357992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.358262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.358281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.361948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.362211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.362230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.365872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.366130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.366148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.369709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.369970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.369988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.373603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 
[2024-05-15 08:40:16.373863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.373881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.377574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.377830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.377852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.382096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.382365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.382384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.386996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.387263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.387281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.391383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.391631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.391649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.395447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.395683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.395701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.399508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.399752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.399770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.403506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) 
with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.403750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.403769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.407348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.407616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.407634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.411486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.411760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.411778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.416011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.416271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.416290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.420592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.420837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.420855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.424759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.425006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.425025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.428830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.429069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.429088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.432886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.433134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.433152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.436959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.437205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.437223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.440818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.441072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.441090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.444674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.444929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.444947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.448557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.448814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.448833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.452411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.452669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.452688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.456367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.455 [2024-05-15 08:40:16.456606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.455 [2024-05-15 08:40:16.456624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.455 [2024-05-15 08:40:16.460556] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.456 [2024-05-15 08:40:16.460805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.456 [2024-05-15 08:40:16.460824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.456 [2024-05-15 08:40:16.465617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.456 [2024-05-15 08:40:16.465864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.456 [2024-05-15 08:40:16.465882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.456 [2024-05-15 08:40:16.469761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.456 [2024-05-15 08:40:16.470003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.456 [2024-05-15 08:40:16.470021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.456 [2024-05-15 08:40:16.474037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.456 [2024-05-15 08:40:16.474298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.456 [2024-05-15 08:40:16.474317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.478139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.478394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.478413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.482161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.482416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.482434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.486117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.486373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.486396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:28:29.716 [2024-05-15 08:40:16.490510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.490759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.490781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.495043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.495291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.495313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.499964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.500232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.500252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.505000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.505261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.505281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.509910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.510192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.510211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.514680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.514942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.514961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.518971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.519243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.519262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.522794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.523048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.523067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.526649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.526898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.526917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.530474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.530721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.530739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.534297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.534553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.534571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.538339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.538584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.538602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.542262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.542535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.542553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.546423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.546663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.546682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.551160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.551433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.551452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.555876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.556122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.556141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.560667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.560909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.560928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.565577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.565838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.565857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.570309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.570559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.570578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.575030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.575291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.575311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.579866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.580084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.580103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.584407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.584658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.716 [2024-05-15 08:40:16.584678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.716 [2024-05-15 08:40:16.589295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.716 [2024-05-15 08:40:16.589532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.589551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.594070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.594314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.594333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.599172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.599412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.599431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.604354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.604595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.604616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.608572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.608794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.608813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.612877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.613105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 
[2024-05-15 08:40:16.613123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.617661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.617919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.617938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.623054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.623297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.623316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.627496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.627728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.627746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.632503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.632722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.632740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.637335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.637560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.637578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.642051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.642291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.642311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.646151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.646395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.646414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.650026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.650275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.650294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.654014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.654259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.654278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.657948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.658171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.658190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.661679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.661899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.661918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.665650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.665866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.665885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.669569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.669790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.669809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.673538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.673764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.673782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.677511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.677737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.677756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.681650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.681873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.681891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.685539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.685766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.685784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.689481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.689715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.689734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.693420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.693643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.693661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.697884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.698174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.698192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.717 [2024-05-15 08:40:16.703295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.717 [2024-05-15 08:40:16.703642] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.717 [2024-05-15 08:40:16.703661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.718 [2024-05-15 08:40:16.709417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.718 [2024-05-15 08:40:16.709591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.718 [2024-05-15 08:40:16.709610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.718 [2024-05-15 08:40:16.715212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.718 [2024-05-15 08:40:16.715322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.718 [2024-05-15 08:40:16.715340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.718 [2024-05-15 08:40:16.721539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.718 [2024-05-15 08:40:16.721703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.718 [2024-05-15 08:40:16.721725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.718 [2024-05-15 08:40:16.727755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.718 [2024-05-15 08:40:16.727885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.718 [2024-05-15 08:40:16.727903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.718 [2024-05-15 08:40:16.734097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.718 [2024-05-15 08:40:16.734260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.718 [2024-05-15 08:40:16.734279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.977 [2024-05-15 08:40:16.740062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.977 [2024-05-15 08:40:16.740227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.977 [2024-05-15 08:40:16.740246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.977 [2024-05-15 08:40:16.746744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.977 [2024-05-15 08:40:16.746933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.977 [2024-05-15 08:40:16.746952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.977 [2024-05-15 08:40:16.753215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.977 [2024-05-15 08:40:16.753283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.977 [2024-05-15 08:40:16.753301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.977 [2024-05-15 08:40:16.759200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.977 [2024-05-15 08:40:16.759347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.977 [2024-05-15 08:40:16.759365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.977 [2024-05-15 08:40:16.766433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.977 [2024-05-15 08:40:16.766567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.977 [2024-05-15 08:40:16.766587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.977 [2024-05-15 08:40:16.772533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.977 [2024-05-15 08:40:16.772680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.977 [2024-05-15 08:40:16.772699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.977 [2024-05-15 08:40:16.778376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.977 [2024-05-15 08:40:16.778556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.977 [2024-05-15 08:40:16.778576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.977 [2024-05-15 08:40:16.784818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.977 [2024-05-15 08:40:16.784930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.977 [2024-05-15 08:40:16.784948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.977 [2024-05-15 08:40:16.790501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.977 [2024-05-15 
08:40:16.790613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.790635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.794797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.794930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.794949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.799262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.799365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.799382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.803606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.803711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.803729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.807934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.808026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.808043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.812212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.812329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.812347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.816793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.816906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.816924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.821333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 
00:28:29.978 [2024-05-15 08:40:16.821447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.821464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.825693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.825859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.825877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.830779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.830941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.830960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.836265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.836381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.836399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.841563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.841713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.841732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.846651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.846793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.846811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.851825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.852003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.852021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.857203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with 
pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.857339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.857357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.862402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.862546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.862568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.867478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.867625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.867643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.872556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.872709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.872729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.877634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.877786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.877804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.882985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.883131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.883151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.888173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.888332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.888351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.893876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.893952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.893969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.899005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.899178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.899196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.904229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.904364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.904382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.909531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.909694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.909713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.914710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.914907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.914926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.978 [2024-05-15 08:40:16.919833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.978 [2024-05-15 08:40:16.919985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.978 [2024-05-15 08:40:16.920004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.925056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.925190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.925209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.930527] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.930717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.930735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.936415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.936567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.936586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.942244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.942362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.942380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.946600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.946708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.946725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.951631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.951728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.951746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.956217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.956291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.956308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.960061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.960151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.960175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.963962] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.964041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.964059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.967788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.967860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.967877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.971678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.971793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.971814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.975880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.975947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.975965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.980100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.980183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.980201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.984058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.984128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.984146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.987946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.988027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.988048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 
08:40:16.991923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.991991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.992009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.995640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:29.979 [2024-05-15 08:40:16.995692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.979 [2024-05-15 08:40:16.995710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.979 [2024-05-15 08:40:16.999682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.239 [2024-05-15 08:40:16.999841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.239 [2024-05-15 08:40:16.999860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.239 [2024-05-15 08:40:17.003979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.239 [2024-05-15 08:40:17.004034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.239 [2024-05-15 08:40:17.004052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.239 [2024-05-15 08:40:17.008533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.239 [2024-05-15 08:40:17.008601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.239 [2024-05-15 08:40:17.008619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.239 [2024-05-15 08:40:17.012525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.239 [2024-05-15 08:40:17.012576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.239 [2024-05-15 08:40:17.012594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.239 [2024-05-15 08:40:17.016431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.239 [2024-05-15 08:40:17.016494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.016511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:30.240 [2024-05-15 08:40:17.020304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.020373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.020391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.024254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.024341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.024359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.028116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.028195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.028213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.032197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.032290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.032308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.036397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.036465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.036482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.040275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.040344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.040361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.044156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.044244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.044262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.048150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.048245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.048263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.052271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.052350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.052368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.056206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.056311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.056329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.060160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.060263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.060281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.064249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.064324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.064343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.068186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.068239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.068258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.071963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.072024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.072043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.075772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.075832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.075851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.079575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.079630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.079647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.083334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.083390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.083408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.087067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.087135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.087153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.090897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.090972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.090994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.094778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.094863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.094880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.098667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.098737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.098754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.102572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.102625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.102642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.107124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.107219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.107238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.111074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.111130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.111148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.115027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.115124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.240 [2024-05-15 08:40:17.115141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.240 [2024-05-15 08:40:17.118961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.240 [2024-05-15 08:40:17.119014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.119032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.122899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.122951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.122968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.126815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.126870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.126887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.130769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.130820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.130837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.135363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.135445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.135462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.139726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.139831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.139849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.144836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.144981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.145000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.150776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.150865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.150883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.155363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.155505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.155523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.159682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.159777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 
08:40:17.159794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.163889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.163978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.163995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.167841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.167911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.167929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.171916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.172008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.172026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.176793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.176947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.176964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.181781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.181884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.181902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.186000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.186084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.186102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.190195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.190304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.241 [2024-05-15 08:40:17.190322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.194354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.194475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.194493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.198511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.198587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.198604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.202593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.202705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.202730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.206722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.206781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.206799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.210751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.210883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.210902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.215645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.215801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.215819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.220812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.220920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.220937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.225964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.226074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.226091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.231270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.231420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.241 [2024-05-15 08:40:17.231439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.241 [2024-05-15 08:40:17.236440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.241 [2024-05-15 08:40:17.236632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.242 [2024-05-15 08:40:17.236651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.242 [2024-05-15 08:40:17.241526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.242 [2024-05-15 08:40:17.241712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.242 [2024-05-15 08:40:17.241730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.242 [2024-05-15 08:40:17.246739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.242 [2024-05-15 08:40:17.246918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.242 [2024-05-15 08:40:17.246937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.242 [2024-05-15 08:40:17.251836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.242 [2024-05-15 08:40:17.252000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.242 [2024-05-15 08:40:17.252018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.242 [2024-05-15 08:40:17.257028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.242 [2024-05-15 08:40:17.257226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.242 [2024-05-15 08:40:17.257245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.502 [2024-05-15 08:40:17.262228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.502 [2024-05-15 08:40:17.262424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.502 [2024-05-15 08:40:17.262443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.502 [2024-05-15 08:40:17.267590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.502 [2024-05-15 08:40:17.267690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.502 [2024-05-15 08:40:17.267708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.502 [2024-05-15 08:40:17.272780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.502 [2024-05-15 08:40:17.272892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.502 [2024-05-15 08:40:17.272913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.502 [2024-05-15 08:40:17.278033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.502 [2024-05-15 08:40:17.278203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.502 [2024-05-15 08:40:17.278221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.503 [2024-05-15 08:40:17.283495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.503 [2024-05-15 08:40:17.283671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.503 [2024-05-15 08:40:17.283689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.503 [2024-05-15 08:40:17.287956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.503 [2024-05-15 08:40:17.288078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.503 [2024-05-15 08:40:17.288096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.503 [2024-05-15 08:40:17.292001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:30.503 [2024-05-15 08:40:17.292097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.292115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.296842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.296966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.296983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.301122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.301180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.301213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.304948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.305013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.305031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.308789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.308868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.308886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.312616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.312677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.312695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.316411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.316463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.316480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.320342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.320411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.320428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.324618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.324732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.324755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.329273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.329344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.329362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.333472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.333547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.333564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.338076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.338145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.338163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.342655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.342783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.342802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.346718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.346863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.346882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.350627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.350756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.350774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.354610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.354700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.354719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.358591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.358656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.358673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.362538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.362608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.362628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.366457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.366503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.366520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.370335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.370401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.370419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.374205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.374270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.374289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.503 [2024-05-15 08:40:17.378066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.503 [2024-05-15 08:40:17.378143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.503 [2024-05-15 08:40:17.378161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.382118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.382190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.382207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.386686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.386746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.386764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.390729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.390803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.390820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.394577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.394627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.394644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.398408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.398462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.398479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.402255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.402354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.402372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.406179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.406230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.406247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.410227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.410293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.410310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.414041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.414099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.414117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.417864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.417937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.417954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.421707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.421758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.421775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.425482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.425563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.425580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.429414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.429482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.429500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.433918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.433971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.433988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.438063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.438133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.438151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.442091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.442191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.442209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.446036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.446093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.446111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.450034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.450128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.450146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.454049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.454100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.454118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.457907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.457976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.457994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.461798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.461910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.461931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.466150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.466222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.466243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.470731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.470844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.470862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.475663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.475721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.475738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.479586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.479681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.504 [2024-05-15 08:40:17.479698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.504 [2024-05-15 08:40:17.483621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.504 [2024-05-15 08:40:17.483723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.505 [2024-05-15 08:40:17.483740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.505 [2024-05-15 08:40:17.487595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.505 [2024-05-15 08:40:17.487658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.505 [2024-05-15 08:40:17.487675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.505 [2024-05-15 08:40:17.491426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.505 [2024-05-15 08:40:17.491486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.505 [2024-05-15 08:40:17.491503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.505 [2024-05-15 08:40:17.495311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.505 [2024-05-15 08:40:17.495433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.505 [2024-05-15 08:40:17.495451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.505 [2024-05-15 08:40:17.499641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.505 [2024-05-15 08:40:17.499694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.505 [2024-05-15 08:40:17.499711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.505 [2024-05-15 08:40:17.504601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.505 [2024-05-15 08:40:17.504683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.505 [2024-05-15 08:40:17.504701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.505 [2024-05-15 08:40:17.508740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.505 [2024-05-15 08:40:17.508835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.505 [2024-05-15 08:40:17.508852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.505 [2024-05-15 08:40:17.512796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.505 [2024-05-15 08:40:17.512893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.505 [2024-05-15 08:40:17.512911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.505 [2024-05-15 08:40:17.516828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.505 [2024-05-15 08:40:17.516893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.505 [2024-05-15 08:40:17.516910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.505 [2024-05-15 08:40:17.520850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.505 [2024-05-15 08:40:17.520906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.505 [2024-05-15 08:40:17.520923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.524758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.524822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.524840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.528646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.528701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.528718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.532531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.532585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.532603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.536496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.536544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.536562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.540568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.540626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.540643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.545212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.545263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.545281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.549556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.549626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.549643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.553932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.554112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.554131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.559740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.559818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.559836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.563809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.563897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.563914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.567857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.567957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.567975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.571825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.571899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.571917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.575683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.575747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.575769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.579675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.579730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.579748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.583536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.583617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.583634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.587529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.587580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.587598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.591454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.591505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.591523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.595274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.595342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.595360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.599230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.599287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.599305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.603198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.603283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.603301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.607108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.607163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.607186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.610890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.611004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.611021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.614770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.766 [2024-05-15 08:40:17.614862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.766 [2024-05-15 08:40:17.614879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.766 [2024-05-15 08:40:17.618825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.618892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.618909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.622617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.622671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.622689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.626447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.626500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.626517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.630098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.630156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.630179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.633720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.633773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.633790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.637353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.637424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.637451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.641299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.641380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.641397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.645664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.645728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.645746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.650276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.650346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.650364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.654229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.654313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.654331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.658163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.658251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.658268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.662155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.662271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.662289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.666155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.666238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.666255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.670080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.670134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.670152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.674217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.674308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.674325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.678116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.678174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.678211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.682068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.682158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.682180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.686161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.686246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.686264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.690100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.690182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.690217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.693993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.694049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.694067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.697860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.697923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.697940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.701816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.701880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.701897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.705812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.705863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.705880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.709727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.709817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.709834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.713681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.713747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.713764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.717603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.717658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.767 [2024-05-15 08:40:17.717674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.767 [2024-05-15 08:40:17.721879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.767 [2024-05-15 08:40:17.721936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.721953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.725783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.725862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.725879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.729591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.729647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.729664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.733398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.733466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.733484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.737264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.737337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.737354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.741131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.741241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.741258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.745455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.745509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.745527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.750277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.750350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.750367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.754454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.754524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.754541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.758458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.758540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.758557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.762475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.762557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.762574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.766577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.766663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.766680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.770576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.770650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.770667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.774490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.774597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.774615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.778641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.778707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.778725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.768 [2024-05-15 08:40:17.783131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:30.768 [2024-05-15 08:40:17.783203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.768 [2024-05-15 08:40:17.783225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.787991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.788065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.788083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.792624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.792691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.792708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.797660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.797715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.797733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.803134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.803226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.803244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.807715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.807785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.807803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.811784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.811898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.811916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.815673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.815747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.815764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.819705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.819758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.819775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.823796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.823871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.823887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.827824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.827899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.827916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.831816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.028 [2024-05-15 08:40:17.831893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.028 [2024-05-15 08:40:17.831911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.028 [2024-05-15 08:40:17.835792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.835917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.835937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.839812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.839886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.839903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.844446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.844516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.844534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.849212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.849346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.849365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.853705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.853778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.853795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.858157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.858238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.858256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.863122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.863209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.863227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.867633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.867689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.867706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.871610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.871677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.871695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.875340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.875456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.875473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.879452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.879584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.879602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.884861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.884969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.884987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.889617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.889738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.029 [2024-05-15 08:40:17.889756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.029 [2024-05-15 08:40:17.893756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90
00:28:31.029 [2024-05-15 08:40:17.893862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.029 [2024-05-15 08:40:17.893880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.029 [2024-05-15 08:40:17.897812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.029 [2024-05-15 08:40:17.897908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.029 [2024-05-15 08:40:17.897929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.029 [2024-05-15 08:40:17.901815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.029 [2024-05-15 08:40:17.901906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.029 [2024-05-15 08:40:17.901924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.029 [2024-05-15 08:40:17.905960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.029 [2024-05-15 08:40:17.906086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.906104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.910027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.910139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.910157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.914058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.914135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.914152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.918040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.918154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.918178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.922037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.922159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.922184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.926083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.926197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.926215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.930207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.930301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.930320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.934397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.934505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.934523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.938363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.938465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.938482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.942524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.942626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.942643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.946374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.946510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.946528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.950575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 
08:40:17.950762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.950781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.955668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.955837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.955855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.960768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.960920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.960938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.966020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.966140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.966159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.971594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.971702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.971719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.978115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.978207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.978225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.030 [2024-05-15 08:40:17.984326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.030 [2024-05-15 08:40:17.984457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.030 [2024-05-15 08:40:17.984476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.031 [2024-05-15 08:40:17.991242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 
00:28:31.031 [2024-05-15 08:40:17.991329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.031 [2024-05-15 08:40:17.991347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.031 [2024-05-15 08:40:17.997777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.031 [2024-05-15 08:40:17.997951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.031 [2024-05-15 08:40:17.997969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.031 [2024-05-15 08:40:18.004708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.031 [2024-05-15 08:40:18.004812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.031 [2024-05-15 08:40:18.004829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.031 [2024-05-15 08:40:18.011380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.031 [2024-05-15 08:40:18.011498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.031 [2024-05-15 08:40:18.011515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.031 [2024-05-15 08:40:18.017867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.031 [2024-05-15 08:40:18.017973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.031 [2024-05-15 08:40:18.017991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.031 [2024-05-15 08:40:18.024689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.031 [2024-05-15 08:40:18.024796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.031 [2024-05-15 08:40:18.024813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.031 [2024-05-15 08:40:18.031135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.031 [2024-05-15 08:40:18.031251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.031 [2024-05-15 08:40:18.031268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.031 [2024-05-15 08:40:18.038688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with 
pdu=0x2000190fef90 00:28:31.031 [2024-05-15 08:40:18.038875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.031 [2024-05-15 08:40:18.038894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.031 [2024-05-15 08:40:18.044987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.031 [2024-05-15 08:40:18.045080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.031 [2024-05-15 08:40:18.045097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.290 [2024-05-15 08:40:18.051001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.290 [2024-05-15 08:40:18.051144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.290 [2024-05-15 08:40:18.051169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.290 [2024-05-15 08:40:18.056881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8dbe00) with pdu=0x2000190fef90 00:28:31.290 [2024-05-15 08:40:18.056998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.290 [2024-05-15 08:40:18.057017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.290 00:28:31.290 Latency(us) 00:28:31.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.290 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:31.290 nvme0n1 : 2.00 7032.74 879.09 0.00 0.00 2270.29 1674.02 7237.45 00:28:31.290 =================================================================================================================== 00:28:31.290 Total : 7032.74 879.09 0.00 0.00 2270.29 1674.02 7237.45 00:28:31.290 0 00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:31.290 | .driver_specific 00:28:31.290 | .nvme_error 00:28:31.290 | .status_code 00:28:31.290 | .command_transient_transport_error' 00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 454 > 0 )) 00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 442779 00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 442779 ']' 00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 442779 00:28:31.290 
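The get_transient_errcount check traced just above is the pass/fail heart of this test: bdevperf keeps per-bdev NVMe error counters, and the harness reads them back over bdevperf's RPC socket. A minimal sketch of the same query, assuming the socket path (/var/tmp/bperf.sock) and bdev name (nvme0n1) used in this run:

    # Fetch nvme0n1's I/O statistics from bdevperf and extract the number of
    # completions that ended in COMMAND TRANSIENT TRANSPORT ERROR.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Since the test deliberately corrupts data digests, a non-zero count (454 in this run) is the expected, passing result.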
00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 442779
00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 442779'
00:28:31.290 killing process with pid 442779
00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 442779
00:28:31.290 Received shutdown signal, test time was about 2.000000 seconds
00:28:31.290 
00:28:31.290 Latency(us)
00:28:31.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:31.290 ===================================================================================================================
00:28:31.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:31.290 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 442779
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 440654
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 440654 ']'
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 440654
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 440654
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 440654'
00:28:31.549 killing process with pid 440654
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 440654
00:28:31.549 [2024-05-15 08:40:18.557764] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:28:31.549 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 440654
00:28:31.808 
00:28:31.808 real 0m16.886s
00:28:31.808 user 0m32.093s
00:28:31.808 sys 0m4.666s
00:28:31.808 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable
00:28:31.808 08:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:31.808 ************************************
00:28:31.808 END TEST nvmf_digest_error
00:28:31.808 ************************************
00:28:31.808 08:40:18 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:31.808 08:40:18 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:31.808 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:31.808 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:28:31.808 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:31.808 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:28:31.808 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:31.808 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:31.808 rmmod nvme_tcp
00:28:31.808 rmmod nvme_fabrics
00:28:32.067 rmmod nvme_keyring
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 440654 ']'
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 440654
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 440654 ']'
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 440654
00:28:32.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (440654) - No such process
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 440654 is not found'
00:28:32.067 Process with pid 440654 is not found
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:32.067 08:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:33.971 08:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:33.971 
00:28:33.971 real 0m41.346s
00:28:33.971 user 1m6.032s
00:28:33.971 sys 0m13.029s
00:28:33.971 08:40:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable
00:28:33.971 08:40:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:33.971 ************************************
00:28:33.971 END TEST nvmf_digest
00:28:33.971 ************************************
00:28:33.971 08:40:20 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]]
00:28:33.971 08:40:20 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]]
00:28:33.971 08:40:20 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]]
00:28:33.971 08:40:20 nvmf_tcp -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:28:33.971 08:40:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:28:33.971 08:40:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:28:33.971 08:40:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:33.971 ************************************
00:28:33.971 START TEST nvmf_bdevperf 00:28:33.971 ************************************ 00:28:33.971 08:40:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:34.229 * Looking for test storage... 00:28:34.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.229 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:34.230 08:40:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:39.501 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:39.501 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:39.501 Found net devices under 0000:86:00.0: cvl_0_0 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:39.501 Found net devices under 0000:86:00.1: cvl_0_1 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:39.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:28:39.501 00:28:39.501 --- 10.0.0.2 ping statistics --- 00:28:39.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.501 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:39.501 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:28:39.501 00:28:39.501 --- 10.0.0.1 ping statistics --- 00:28:39.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.502 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:28:39.502 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.502 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:39.502 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:39.502 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.502 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:39.502 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:39.502 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.502 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:39.502 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=446890 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 446890 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 446890 ']' 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:39.760 08:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.760 [2024-05-15 08:40:26.594495] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:28:39.760 [2024-05-15 08:40:26.594539] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.760 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.760 [2024-05-15 08:40:26.651551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:39.760 [2024-05-15 08:40:26.731464] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
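Stripped of tracing, the target bring-up recorded above reduces to launching nvmf_tgt inside the freshly built network namespace and waiting for its RPC socket to answer. A simplified sketch with this run's namespace, masks, and socket path; the polling loop is only a stand-in for the harness's waitforlisten, which additionally checks that the pid stays alive:

    # Start the NVMe-oF target inside the target-side namespace.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Wait until the application services RPCs on its default socket.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done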
00:28:39.760 [2024-05-15 08:40:26.731501] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.760 [2024-05-15 08:40:26.731510] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.760 [2024-05-15 08:40:26.731516] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.760 [2024-05-15 08:40:26.731521] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.760 [2024-05-15 08:40:26.731616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.760 [2024-05-15 08:40:26.731714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.760 [2024-05-15 08:40:26.731715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.694 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:40.694 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:28:40.694 08:40:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:40.694 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.694 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.694 08:40:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.694 08:40:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.694 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.694 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.694 [2024-05-15 08:40:27.444466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.694 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.695 Malloc0 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
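The nvmf_create_transport call being traced here is the first of five RPCs that provision the target end to end; the bdev, subsystem, namespace, and listener calls follow in the trace below. Issued directly with rpc.py against the target's RPC socket, the sequence is roughly as follows (values from this run; the harness itself goes through its rpc_cmd wrapper, so running it by hand this way is an assumption):

    RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, flags as used by the harness
    $RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420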
00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.695 [2024-05-15 08:40:27.505670] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:40.695 [2024-05-15 08:40:27.505885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:40.695 { 00:28:40.695 "params": { 00:28:40.695 "name": "Nvme$subsystem", 00:28:40.695 "trtype": "$TEST_TRANSPORT", 00:28:40.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.695 "adrfam": "ipv4", 00:28:40.695 "trsvcid": "$NVMF_PORT", 00:28:40.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.695 "hdgst": ${hdgst:-false}, 00:28:40.695 "ddgst": ${ddgst:-false} 00:28:40.695 }, 00:28:40.695 "method": "bdev_nvme_attach_controller" 00:28:40.695 } 00:28:40.695 EOF 00:28:40.695 )") 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:40.695 08:40:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:40.695 "params": { 00:28:40.695 "name": "Nvme1", 00:28:40.695 "trtype": "tcp", 00:28:40.695 "traddr": "10.0.0.2", 00:28:40.695 "adrfam": "ipv4", 00:28:40.695 "trsvcid": "4420", 00:28:40.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:40.695 "hdgst": false, 00:28:40.695 "ddgst": false 00:28:40.695 }, 00:28:40.695 "method": "bdev_nvme_attach_controller" 00:28:40.695 }' 00:28:40.695 [2024-05-15 08:40:27.556183] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:28:40.695 [2024-05-15 08:40:27.556228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447026 ] 00:28:40.695 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.695 [2024-05-15 08:40:27.609013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.695 [2024-05-15 08:40:27.682280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.953 Running I/O for 1 seconds... 
00:28:41.888 
00:28:41.888                                                                                 Latency(us)
00:28:41.888 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:41.888 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:41.888 	 Verification LBA range: start 0x0 length 0x4000
00:28:41.888 	 Nvme1n1                             :       1.01   10915.47      42.64       0.00       0.00   11679.91    2336.50   14930.81
00:28:41.888 ===================================================================================================================
00:28:41.888 Total                                  :              10915.47      42.64       0.00       0.00   11679.91    2336.50   14930.81
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=447265
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:28:42.146 {
00:28:42.146   "params": {
00:28:42.146     "name": "Nvme$subsystem",
00:28:42.146     "trtype": "$TEST_TRANSPORT",
00:28:42.146     "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:42.146     "adrfam": "ipv4",
00:28:42.146     "trsvcid": "$NVMF_PORT",
00:28:42.146     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:42.146     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:42.146     "hdgst": ${hdgst:-false},
00:28:42.146     "ddgst": ${ddgst:-false}
00:28:42.146   },
00:28:42.146   "method": "bdev_nvme_attach_controller"
00:28:42.146 }
00:28:42.146 EOF
00:28:42.146 )")
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:28:42.146 08:40:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:28:42.146   "params": {
00:28:42.146     "name": "Nvme1",
00:28:42.146     "trtype": "tcp",
00:28:42.146     "traddr": "10.0.0.2",
00:28:42.146     "adrfam": "ipv4",
00:28:42.146     "trsvcid": "4420",
00:28:42.146     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:42.146     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:42.146     "hdgst": false,
00:28:42.146     "ddgst": false
00:28:42.146   },
00:28:42.146   "method": "bdev_nvme_attach_controller"
00:28:42.146 }'
00:28:42.146 [2024-05-15 08:40:29.101940] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:28:42.146 [2024-05-15 08:40:29.101991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447265 ]
00:28:42.146 EAL: No free 2048 kB hugepages reported on node 1
00:28:42.146 [2024-05-15 08:40:29.156029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:42.404 [2024-05-15 08:40:29.226336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:42.404 Running I/O for 15 seconds...
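The 15-second run just started differs from the 1-second baseline in two ways: the -f flag keeps bdevperf alive when I/O fails, and the harness backgrounds it (bdevperfpid=447265) so that the target can be killed mid-run. A rough sketch of that sequence under those assumptions, with the variable names illustrative rather than taken from bdevperf.sh:

    # launch bdevperf against the generated attach config, then kill the
    # nvmf target while verify I/O is still in flight
    "$rootdir"/build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3                 # let I/O ramp up first
    kill -9 "$tgt_pid"      # $tgt_pid: the nvmf_tgt process (446890 in this run)
    sleep 3                 # give the host side time to notice and retry

The abort storm and reconnect loop that follow are the direct result of that kill.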
00:28:45.691 08:40:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 446890 00:28:45.691 08:40:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:45.691 [2024-05-15 08:40:32.076057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.691 [2024-05-15 08:40:32.076099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.691 [2024-05-15 08:40:32.076117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.691 [2024-05-15 08:40:32.076125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.691 [2024-05-15 08:40:32.076134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.691 [2024-05-15 08:40:32.076142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.691 [2024-05-15 08:40:32.076151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.691 [2024-05-15 08:40:32.076158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.691 [2024-05-15 08:40:32.076173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.691 [2024-05-15 08:40:32.076184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.691 [2024-05-15 08:40:32.076193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.691 [2024-05-15 08:40:32.076200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.691 [2024-05-15 08:40:32.076208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.691 [2024-05-15 08:40:32.076215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076263] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076614] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.692 [2024-05-15 08:40:32.076838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.692 [2024-05-15 08:40:32.076847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.076854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.076862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.076869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.076877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.076884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.076893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.076900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.076909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.076915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.076924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.076930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.076938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.076945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.076953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.076960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.076968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.076975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.076983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.076990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.076998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.077004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.077020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.077035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.077054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.077068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:45.693 [2024-05-15 08:40:32.077076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.693 [2024-05-15 08:40:32.077085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077375] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.693 [2024-05-15 08:40:32.077557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.693 [2024-05-15 08:40:32.077565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:45 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.077747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.077991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.077997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.078006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.078013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.078021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.078028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.078035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.078044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.078052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.078058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.078066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.078072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.078080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.078086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.078094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.694 [2024-05-15 08:40:32.078101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.078109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 
[2024-05-15 08:40:32.078116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.694 [2024-05-15 08:40:32.078124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.694 [2024-05-15 08:40:32.078130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.695 [2024-05-15 08:40:32.078138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.695 [2024-05-15 08:40:32.078145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.695 [2024-05-15 08:40:32.078152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.695 [2024-05-15 08:40:32.078161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.695 [2024-05-15 08:40:32.078174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.695 [2024-05-15 08:40:32.078181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.695 [2024-05-15 08:40:32.078189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.695 [2024-05-15 08:40:32.078195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.695 [2024-05-15 08:40:32.078203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.695 [2024-05-15 08:40:32.078210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.695 [2024-05-15 08:40:32.078217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850a50 is same with the state(5) to be set 00:28:45.695 [2024-05-15 08:40:32.078226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.695 [2024-05-15 08:40:32.078231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.695 [2024-05-15 08:40:32.078237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108664 len:8 PRP1 0x0 PRP2 0x0 00:28:45.695 [2024-05-15 08:40:32.078245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.695 [2024-05-15 08:40:32.078286] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1850a50 was disconnected and freed. reset controller. 
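Everything from the kill to this point is the host draining I/O qpair 1: each in-flight READ/WRITE (LBAs 107976 through 108992) is completed manually with ABORTED - SQ DELETION before qpair 0x1850a50 is freed. When triaging a dump like this, a quick tally is usually enough (log file name illustrative):

    # count aborted reads vs. writes reported by nvme_qpair.c
    grep -Eo 'print_command: \*NOTICE\*: (READ|WRITE)' \
        nvmf-tcp-phy-autotest.log | sort | uniq -c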
00:28:45.695 [2024-05-15 08:40:32.078330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.695 [2024-05-15 08:40:32.078338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.695 [2024-05-15 08:40:32.078346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.695 [2024-05-15 08:40:32.078354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.695 [2024-05-15 08:40:32.078361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.695 [2024-05-15 08:40:32.078368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.695 [2024-05-15 08:40:32.078375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.695 [2024-05-15 08:40:32.078382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.695 [2024-05-15 08:40:32.078388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:45.695 [2024-05-15 08:40:32.081274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.695 [2024-05-15 08:40:32.081302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:45.695 [2024-05-15 08:40:32.081772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.695 [2024-05-15 08:40:32.081949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.695 [2024-05-15 08:40:32.081959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:45.695 [2024-05-15 08:40:32.081966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:45.695 [2024-05-15 08:40:32.082149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:45.695 [2024-05-15 08:40:32.082336] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.695 [2024-05-15 08:40:32.082344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.695 [2024-05-15 08:40:32.082351] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.695 [2024-05-15 08:40:32.085227] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
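From here the host retries in a steady cycle, repeated below roughly every 13 ms: resetting controller -> connect() failed, errno = 111 -> sock connection error -> controller reinitialization failed -> Resetting controller failed. On Linux, errno 111 is ECONNREFUSED, consistent with the target process having been killed rather than the network dropping; a quick lookup (python3 used only as a reference tool, assumed available):

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # -> ECONNREFUSED Connection refused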
00:28:45.695 [2024-05-15 08:40:32.094560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.695 [2024-05-15 08:40:32.094855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.695 [2024-05-15 08:40:32.094993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.695 [2024-05-15 08:40:32.095003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:45.695 [2024-05-15 08:40:32.095011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:45.695 [2024-05-15 08:40:32.095197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:45.695 [2024-05-15 08:40:32.095377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.695 [2024-05-15 08:40:32.095385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.695 [2024-05-15 08:40:32.095392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.695 [2024-05-15 08:40:32.098266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.695 [2024-05-15 08:40:32.107540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.695 [2024-05-15 08:40:32.107850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.695 [2024-05-15 08:40:32.107954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.695 [2024-05-15 08:40:32.107964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:45.695 [2024-05-15 08:40:32.107971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:45.695 [2024-05-15 08:40:32.108146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:45.695 [2024-05-15 08:40:32.108327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.695 [2024-05-15 08:40:32.108336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.695 [2024-05-15 08:40:32.108342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.695 [2024-05-15 08:40:32.111060] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.695 [2024-05-15 08:40:32.120492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.695 [2024-05-15 08:40:32.120929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.695 [2024-05-15 08:40:32.121084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.695 [2024-05-15 08:40:32.121114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:45.695 [2024-05-15 08:40:32.121137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:45.695 [2024-05-15 08:40:32.121746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:45.695 [2024-05-15 08:40:32.122136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.695 [2024-05-15 08:40:32.122144] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.695 [2024-05-15 08:40:32.122150] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.695 [2024-05-15 08:40:32.124871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.695 [2024-05-15 08:40:32.133459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.695 [2024-05-15 08:40:32.133814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.695 [2024-05-15 08:40:32.133914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.695 [2024-05-15 08:40:32.133924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:45.695 [2024-05-15 08:40:32.133931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:45.695 [2024-05-15 08:40:32.134104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:45.695 [2024-05-15 08:40:32.134286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.695 [2024-05-15 08:40:32.134294] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.695 [2024-05-15 08:40:32.134300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.695 [2024-05-15 08:40:32.137017] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.695 [2024-05-15 08:40:32.146303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.695 [2024-05-15 08:40:32.146721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-05-15 08:40:32.147006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-05-15 08:40:32.147038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.695 [2024-05-15 08:40:32.147059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.695 [2024-05-15 08:40:32.147407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.695 [2024-05-15 08:40:32.147582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.695 [2024-05-15 08:40:32.147590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.695 [2024-05-15 08:40:32.147597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.695 [2024-05-15 08:40:32.150323] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.695 [2024-05-15 08:40:32.159289] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.695 [2024-05-15 08:40:32.159670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-05-15 08:40:32.159886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.159896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.696 [2024-05-15 08:40:32.159903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.696 [2024-05-15 08:40:32.160077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.696 [2024-05-15 08:40:32.160263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.696 [2024-05-15 08:40:32.160272] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.696 [2024-05-15 08:40:32.160278] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.696 [2024-05-15 08:40:32.162998] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.696 [2024-05-15 08:40:32.172274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.696 [2024-05-15 08:40:32.172645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.172855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.172865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.696 [2024-05-15 08:40:32.172873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.696 [2024-05-15 08:40:32.173046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.696 [2024-05-15 08:40:32.173227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.696 [2024-05-15 08:40:32.173236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.696 [2024-05-15 08:40:32.173242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.696 [2024-05-15 08:40:32.175961] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.696 [2024-05-15 08:40:32.185245] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.696 [2024-05-15 08:40:32.185672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.185989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.186020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.696 [2024-05-15 08:40:32.186042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.696 [2024-05-15 08:40:32.186380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.696 [2024-05-15 08:40:32.186556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.696 [2024-05-15 08:40:32.186564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.696 [2024-05-15 08:40:32.186570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.696 [2024-05-15 08:40:32.189301] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.696 [2024-05-15 08:40:32.198093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.696 [2024-05-15 08:40:32.198403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.198506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.198515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.696 [2024-05-15 08:40:32.198522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.696 [2024-05-15 08:40:32.198696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.696 [2024-05-15 08:40:32.198870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.696 [2024-05-15 08:40:32.198878] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.696 [2024-05-15 08:40:32.198888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.696 [2024-05-15 08:40:32.201608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.696 [2024-05-15 08:40:32.211031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.696 [2024-05-15 08:40:32.211451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.211654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.211686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.696 [2024-05-15 08:40:32.211708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.696 [2024-05-15 08:40:32.212108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.696 [2024-05-15 08:40:32.212293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.696 [2024-05-15 08:40:32.212303] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.696 [2024-05-15 08:40:32.212309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.696 [2024-05-15 08:40:32.215027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.696 [2024-05-15 08:40:32.224062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.696 [2024-05-15 08:40:32.224436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.224607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.224618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.696 [2024-05-15 08:40:32.224625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.696 [2024-05-15 08:40:32.224799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.696 [2024-05-15 08:40:32.224973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.696 [2024-05-15 08:40:32.224981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.696 [2024-05-15 08:40:32.224987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.696 [2024-05-15 08:40:32.227715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.696 [2024-05-15 08:40:32.236989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.696 [2024-05-15 08:40:32.237339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.237493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.237504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.696 [2024-05-15 08:40:32.237511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.696 [2024-05-15 08:40:32.237685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.696 [2024-05-15 08:40:32.237859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.696 [2024-05-15 08:40:32.237867] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.696 [2024-05-15 08:40:32.237876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.696 [2024-05-15 08:40:32.240597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.696 [2024-05-15 08:40:32.249871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.696 [2024-05-15 08:40:32.250319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.250427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-05-15 08:40:32.250438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.697 [2024-05-15 08:40:32.250444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.697 [2024-05-15 08:40:32.250609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.697 [2024-05-15 08:40:32.250772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.697 [2024-05-15 08:40:32.250779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.697 [2024-05-15 08:40:32.250785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.697 [2024-05-15 08:40:32.253495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.697 [2024-05-15 08:40:32.262713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.697 [2024-05-15 08:40:32.263130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.263398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.263430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.697 [2024-05-15 08:40:32.263453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.697 [2024-05-15 08:40:32.263771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.697 [2024-05-15 08:40:32.263945] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.697 [2024-05-15 08:40:32.263953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.697 [2024-05-15 08:40:32.263960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.697 [2024-05-15 08:40:32.266677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.697 [2024-05-15 08:40:32.275624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.697 [2024-05-15 08:40:32.276061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.276195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.276208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.697 [2024-05-15 08:40:32.276215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.697 [2024-05-15 08:40:32.276390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.697 [2024-05-15 08:40:32.276564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.697 [2024-05-15 08:40:32.276573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.697 [2024-05-15 08:40:32.276580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.697 [2024-05-15 08:40:32.279295] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.697 [2024-05-15 08:40:32.288565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.697 [2024-05-15 08:40:32.288934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.289107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.289116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.697 [2024-05-15 08:40:32.289123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.697 [2024-05-15 08:40:32.289305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.697 [2024-05-15 08:40:32.289484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.697 [2024-05-15 08:40:32.289492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.697 [2024-05-15 08:40:32.289498] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.697 [2024-05-15 08:40:32.292218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.697 [2024-05-15 08:40:32.301473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.697 [2024-05-15 08:40:32.301962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.302182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.302193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.697 [2024-05-15 08:40:32.302200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.697 [2024-05-15 08:40:32.302374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.697 [2024-05-15 08:40:32.302549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.697 [2024-05-15 08:40:32.302557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.697 [2024-05-15 08:40:32.302563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.697 [2024-05-15 08:40:32.305281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.697 [2024-05-15 08:40:32.314393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.697 [2024-05-15 08:40:32.314848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.315082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.315113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.697 [2024-05-15 08:40:32.315135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.697 [2024-05-15 08:40:32.315666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.697 [2024-05-15 08:40:32.315841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.697 [2024-05-15 08:40:32.315849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.697 [2024-05-15 08:40:32.315855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.697 [2024-05-15 08:40:32.318575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.697 [2024-05-15 08:40:32.327368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.697 [2024-05-15 08:40:32.327729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.327983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.327994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.697 [2024-05-15 08:40:32.328001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.697 [2024-05-15 08:40:32.328186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.697 [2024-05-15 08:40:32.328366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.697 [2024-05-15 08:40:32.328374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.697 [2024-05-15 08:40:32.328381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.697 [2024-05-15 08:40:32.331248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.697 [2024-05-15 08:40:32.340569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.697 [2024-05-15 08:40:32.340987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.341141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.341152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.697 [2024-05-15 08:40:32.341159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.697 [2024-05-15 08:40:32.341343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.697 [2024-05-15 08:40:32.341523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.697 [2024-05-15 08:40:32.341531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.697 [2024-05-15 08:40:32.341538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.697 [2024-05-15 08:40:32.344406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.697 [2024-05-15 08:40:32.353667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.697 [2024-05-15 08:40:32.354111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.354281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.354314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.697 [2024-05-15 08:40:32.354336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.697 [2024-05-15 08:40:32.354741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.697 [2024-05-15 08:40:32.354916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.697 [2024-05-15 08:40:32.354924] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.697 [2024-05-15 08:40:32.354930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.697 [2024-05-15 08:40:32.357709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.697 [2024-05-15 08:40:32.366743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.697 [2024-05-15 08:40:32.367155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-05-15 08:40:32.367292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.367302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.698 [2024-05-15 08:40:32.367309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.698 [2024-05-15 08:40:32.367483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.698 [2024-05-15 08:40:32.367657] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.698 [2024-05-15 08:40:32.367665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.698 [2024-05-15 08:40:32.367671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.698 [2024-05-15 08:40:32.370455] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.698 [2024-05-15 08:40:32.379679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.698 [2024-05-15 08:40:32.380100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.380210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.380221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.698 [2024-05-15 08:40:32.380228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.698 [2024-05-15 08:40:32.380402] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.698 [2024-05-15 08:40:32.380576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.698 [2024-05-15 08:40:32.380585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.698 [2024-05-15 08:40:32.380591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.698 [2024-05-15 08:40:32.383311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.698 [2024-05-15 08:40:32.392574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.698 [2024-05-15 08:40:32.393018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.393200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.393233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.698 [2024-05-15 08:40:32.393256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.698 [2024-05-15 08:40:32.393803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.698 [2024-05-15 08:40:32.393979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.698 [2024-05-15 08:40:32.393987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.698 [2024-05-15 08:40:32.393994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.698 [2024-05-15 08:40:32.396733] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.698 [2024-05-15 08:40:32.405535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.698 [2024-05-15 08:40:32.406010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.406156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.406178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.698 [2024-05-15 08:40:32.406189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.698 [2024-05-15 08:40:32.406364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.698 [2024-05-15 08:40:32.406538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.698 [2024-05-15 08:40:32.406547] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.698 [2024-05-15 08:40:32.406553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.698 [2024-05-15 08:40:32.409272] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.698 [2024-05-15 08:40:32.418421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.698 [2024-05-15 08:40:32.418770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.418873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.418883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.698 [2024-05-15 08:40:32.418891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.698 [2024-05-15 08:40:32.419065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.698 [2024-05-15 08:40:32.419246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.698 [2024-05-15 08:40:32.419255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.698 [2024-05-15 08:40:32.419261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.698 [2024-05-15 08:40:32.421978] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.698 [2024-05-15 08:40:32.431260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.698 [2024-05-15 08:40:32.431637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.431724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.431734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.698 [2024-05-15 08:40:32.431741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.698 [2024-05-15 08:40:32.431915] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.698 [2024-05-15 08:40:32.432089] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.698 [2024-05-15 08:40:32.432097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.698 [2024-05-15 08:40:32.432103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.698 [2024-05-15 08:40:32.434907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.698 [2024-05-15 08:40:32.444213] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.698 [2024-05-15 08:40:32.444587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.444773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.444803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.698 [2024-05-15 08:40:32.444832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.698 [2024-05-15 08:40:32.445429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.698 [2024-05-15 08:40:32.445731] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.698 [2024-05-15 08:40:32.445740] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.698 [2024-05-15 08:40:32.445747] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.698 [2024-05-15 08:40:32.448462] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.698 [2024-05-15 08:40:32.457085] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.698 [2024-05-15 08:40:32.457514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-05-15 08:40:32.457766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.457797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.699 [2024-05-15 08:40:32.457819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.699 [2024-05-15 08:40:32.458136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.699 [2024-05-15 08:40:32.458317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.699 [2024-05-15 08:40:32.458326] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.699 [2024-05-15 08:40:32.458332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.699 [2024-05-15 08:40:32.461035] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.699 [2024-05-15 08:40:32.469970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.699 [2024-05-15 08:40:32.470413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.470665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.470675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.699 [2024-05-15 08:40:32.470682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.699 [2024-05-15 08:40:32.470855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.699 [2024-05-15 08:40:32.471029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.699 [2024-05-15 08:40:32.471037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.699 [2024-05-15 08:40:32.471043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.699 [2024-05-15 08:40:32.473754] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.699 [2024-05-15 08:40:32.482837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.699 [2024-05-15 08:40:32.483234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.483382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.483392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.699 [2024-05-15 08:40:32.483399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.699 [2024-05-15 08:40:32.483576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.699 [2024-05-15 08:40:32.483750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.699 [2024-05-15 08:40:32.483758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.699 [2024-05-15 08:40:32.483764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.699 [2024-05-15 08:40:32.486477] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.699 [2024-05-15 08:40:32.495737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.699 [2024-05-15 08:40:32.496138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.496410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.496442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.699 [2024-05-15 08:40:32.496464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.699 [2024-05-15 08:40:32.496865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.699 [2024-05-15 08:40:32.497039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.699 [2024-05-15 08:40:32.497047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.699 [2024-05-15 08:40:32.497053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.699 [2024-05-15 08:40:32.499764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.699 [2024-05-15 08:40:32.508699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.699 [2024-05-15 08:40:32.509102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.509299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.509311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.699 [2024-05-15 08:40:32.509318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.699 [2024-05-15 08:40:32.509492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.699 [2024-05-15 08:40:32.509666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.699 [2024-05-15 08:40:32.509674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.699 [2024-05-15 08:40:32.509680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.699 [2024-05-15 08:40:32.512389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.699 [2024-05-15 08:40:32.521531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.699 [2024-05-15 08:40:32.521950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.522095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.522105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.699 [2024-05-15 08:40:32.522112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.699 [2024-05-15 08:40:32.522295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.699 [2024-05-15 08:40:32.522473] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.699 [2024-05-15 08:40:32.522481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.699 [2024-05-15 08:40:32.522487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.699 [2024-05-15 08:40:32.525194] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.699 [2024-05-15 08:40:32.534569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.699 [2024-05-15 08:40:32.534990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.535213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.535225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.699 [2024-05-15 08:40:32.535232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.699 [2024-05-15 08:40:32.535406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.699 [2024-05-15 08:40:32.535582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.699 [2024-05-15 08:40:32.535591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.699 [2024-05-15 08:40:32.535597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.699 [2024-05-15 08:40:32.538304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.699 [2024-05-15 08:40:32.547438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.699 [2024-05-15 08:40:32.547830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.548055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.548064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.699 [2024-05-15 08:40:32.548071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.699 [2024-05-15 08:40:32.548262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.699 [2024-05-15 08:40:32.548437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.699 [2024-05-15 08:40:32.548445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.699 [2024-05-15 08:40:32.548451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.699 [2024-05-15 08:40:32.551154] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.699 [2024-05-15 08:40:32.560384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.699 [2024-05-15 08:40:32.560779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.561000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.699 [2024-05-15 08:40:32.561010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.699 [2024-05-15 08:40:32.561017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.699 [2024-05-15 08:40:32.561198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.699 [2024-05-15 08:40:32.561373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.699 [2024-05-15 08:40:32.561384] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.699 [2024-05-15 08:40:32.561390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.699 [2024-05-15 08:40:32.564096] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.699 [2024-05-15 08:40:32.573322] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.700 [2024-05-15 08:40:32.573746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.700 [2024-05-15 08:40:32.573993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.700 [2024-05-15 08:40:32.574023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.700 [2024-05-15 08:40:32.574045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.700 [2024-05-15 08:40:32.574313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.700 [2024-05-15 08:40:32.574489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.700 [2024-05-15 08:40:32.574497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.700 [2024-05-15 08:40:32.574503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.700 [2024-05-15 08:40:32.577210] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.700 [2024-05-15 08:40:32.586136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.700 [2024-05-15 08:40:32.586566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.700 [2024-05-15 08:40:32.586764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.700 [2024-05-15 08:40:32.586774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.700 [2024-05-15 08:40:32.586781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.700 [2024-05-15 08:40:32.586960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.700 [2024-05-15 08:40:32.587140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.700 [2024-05-15 08:40:32.587148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.700 [2024-05-15 08:40:32.587154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.700 [2024-05-15 08:40:32.590019] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.700 [2024-05-15 08:40:32.599254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.700 [2024-05-15 08:40:32.599688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.700 [2024-05-15 08:40:32.599891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.700 [2024-05-15 08:40:32.599922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.700 [2024-05-15 08:40:32.599944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.700 [2024-05-15 08:40:32.600481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.700 [2024-05-15 08:40:32.600655] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.700 [2024-05-15 08:40:32.600664] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.700 [2024-05-15 08:40:32.600673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.700 [2024-05-15 08:40:32.603473] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.700 [2024-05-15 08:40:32.612285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.700 [2024-05-15 08:40:32.612707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.700 [2024-05-15 08:40:32.612849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.700 [2024-05-15 08:40:32.612860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.700 [2024-05-15 08:40:32.612867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.700 [2024-05-15 08:40:32.613041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.700 [2024-05-15 08:40:32.613220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.700 [2024-05-15 08:40:32.613228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.700 [2024-05-15 08:40:32.613234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.700 [2024-05-15 08:40:32.615941] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.700 [2024-05-15 08:40:32.625186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.700 [2024-05-15 08:40:32.625612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.700 [2024-05-15 08:40:32.625787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.700 [2024-05-15 08:40:32.625817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:45.700 [2024-05-15 08:40:32.625837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:45.700 [2024-05-15 08:40:32.626440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:45.700 [2024-05-15 08:40:32.626699] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.700 [2024-05-15 08:40:32.626710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.700 [2024-05-15 08:40:32.626719] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.700 [2024-05-15 08:40:32.630821] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.700 .. 00:28:46.224 [2024-05-15 08:40:32.638664 .. 08:40:33.224999] (the identical reset cycle above repeats 46 more times, one attempt roughly every 13 ms; only the timestamps change: nvme_ctrlr_disconnect *NOTICE* resetting controller; posix_sock_create *ERROR* connect() failed, errno = 111, twice per attempt; nvme_tcp_qpair_connect_sock *ERROR* sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420; nvme_tcp_qpair_set_recv_state *ERROR* recv state of tqpair=0x161e840 is same with the state(5) to be set; nvme_tcp_qpair_process_completions *ERROR* Failed to flush tqpair=0x161e840 (9): Bad file descriptor; nvme_ctrlr_process_init *ERROR* Ctrlr is in error state; spdk_nvme_ctrlr_reconnect_poll_async *ERROR* controller reinitialization failed; nvme_ctrlr_fail *ERROR* in failed state.; _bdev_nvme_reset_ctrlr_complete *ERROR* Resetting controller failed.)
00:28:46.224 [2024-05-15 08:40:33.234122] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.224 [2024-05-15 08:40:33.234466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.224 [2024-05-15 08:40:33.234660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.224 [2024-05-15 08:40:33.234691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.224 [2024-05-15 08:40:33.234714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.224 [2024-05-15 08:40:33.235320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.224 [2024-05-15 08:40:33.235909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.224 [2024-05-15 08:40:33.235917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.224 [2024-05-15 08:40:33.235923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.224 [2024-05-15 08:40:33.238550] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.483 [2024-05-15 08:40:33.247239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.483 [2024-05-15 08:40:33.247608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-05-15 08:40:33.247738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-05-15 08:40:33.247747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.483 [2024-05-15 08:40:33.247754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.483 [2024-05-15 08:40:33.247919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.483 [2024-05-15 08:40:33.248087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.483 [2024-05-15 08:40:33.248095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.483 [2024-05-15 08:40:33.248101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.483 [2024-05-15 08:40:33.251011] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.483 [2024-05-15 08:40:33.260163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.483 [2024-05-15 08:40:33.260585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-05-15 08:40:33.260808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-05-15 08:40:33.260817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.483 [2024-05-15 08:40:33.260824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.483 [2024-05-15 08:40:33.260988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.483 [2024-05-15 08:40:33.261152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.483 [2024-05-15 08:40:33.261160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.483 [2024-05-15 08:40:33.261175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.483 [2024-05-15 08:40:33.263901] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.483 [2024-05-15 08:40:33.272991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.483 [2024-05-15 08:40:33.273422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-05-15 08:40:33.273694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-05-15 08:40:33.273725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.483 [2024-05-15 08:40:33.273746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.483 [2024-05-15 08:40:33.274080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.483 [2024-05-15 08:40:33.274263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.483 [2024-05-15 08:40:33.274273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.483 [2024-05-15 08:40:33.274279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.483 [2024-05-15 08:40:33.276987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.483 [2024-05-15 08:40:33.285933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.483 [2024-05-15 08:40:33.286279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-05-15 08:40:33.286501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-05-15 08:40:33.286511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.483 [2024-05-15 08:40:33.286518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.483 [2024-05-15 08:40:33.286685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.483 [2024-05-15 08:40:33.286850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.483 [2024-05-15 08:40:33.286861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.483 [2024-05-15 08:40:33.286867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.483 [2024-05-15 08:40:33.289583] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.483 [2024-05-15 08:40:33.298828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.483 [2024-05-15 08:40:33.299222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-05-15 08:40:33.299422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-05-15 08:40:33.299449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.483 [2024-05-15 08:40:33.299472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.483 [2024-05-15 08:40:33.300024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.483 [2024-05-15 08:40:33.300212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.483 [2024-05-15 08:40:33.300221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.483 [2024-05-15 08:40:33.300227] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.483 [2024-05-15 08:40:33.302936] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.483 [2024-05-15 08:40:33.311650] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.483 [2024-05-15 08:40:33.312103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.312302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.312336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-05-15 08:40:33.312358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.484 [2024-05-15 08:40:33.312942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.484 [2024-05-15 08:40:33.313318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-05-15 08:40:33.313330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-05-15 08:40:33.313339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-05-15 08:40:33.317450] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.484 [2024-05-15 08:40:33.325455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-05-15 08:40:33.325898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.326038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.326049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-05-15 08:40:33.326055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.484 [2024-05-15 08:40:33.326237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.484 [2024-05-15 08:40:33.326412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-05-15 08:40:33.326420] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-05-15 08:40:33.326429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-05-15 08:40:33.329177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.484 [2024-05-15 08:40:33.338295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-05-15 08:40:33.338751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.339020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.339050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-05-15 08:40:33.339072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.484 [2024-05-15 08:40:33.339414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.484 [2024-05-15 08:40:33.339589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-05-15 08:40:33.339597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-05-15 08:40:33.339603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-05-15 08:40:33.342313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.484 [2024-05-15 08:40:33.351468] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-05-15 08:40:33.351905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.352009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.352019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-05-15 08:40:33.352026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.484 [2024-05-15 08:40:33.352210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.484 [2024-05-15 08:40:33.352388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-05-15 08:40:33.352396] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-05-15 08:40:33.352402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-05-15 08:40:33.355250] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.484 [2024-05-15 08:40:33.364482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-05-15 08:40:33.364933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.365081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.365091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-05-15 08:40:33.365098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.484 [2024-05-15 08:40:33.365279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.484 [2024-05-15 08:40:33.365455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-05-15 08:40:33.365463] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-05-15 08:40:33.365469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-05-15 08:40:33.368184] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.484 [2024-05-15 08:40:33.377417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-05-15 08:40:33.377820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.378088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.378119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-05-15 08:40:33.378140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.484 [2024-05-15 08:40:33.378452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.484 [2024-05-15 08:40:33.378626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-05-15 08:40:33.378634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-05-15 08:40:33.378640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-05-15 08:40:33.381371] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.484 [2024-05-15 08:40:33.390403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-05-15 08:40:33.390856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.391106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.391137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-05-15 08:40:33.391158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.484 [2024-05-15 08:40:33.391416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.484 [2024-05-15 08:40:33.391590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-05-15 08:40:33.391598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-05-15 08:40:33.391604] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-05-15 08:40:33.394312] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.484 [2024-05-15 08:40:33.403237] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-05-15 08:40:33.403656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.403851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.403860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-05-15 08:40:33.403867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.484 [2024-05-15 08:40:33.404031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.484 [2024-05-15 08:40:33.404202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-05-15 08:40:33.404210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-05-15 08:40:33.404216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-05-15 08:40:33.406841] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.484 [2024-05-15 08:40:33.416130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-05-15 08:40:33.416451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.416672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-05-15 08:40:33.416681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-05-15 08:40:33.416688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.484 [2024-05-15 08:40:33.416852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.484 [2024-05-15 08:40:33.417016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-05-15 08:40:33.417024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-05-15 08:40:33.417029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-05-15 08:40:33.419751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.485 [2024-05-15 08:40:33.428982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-05-15 08:40:33.429388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.429584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.429614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-05-15 08:40:33.429636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.485 [2024-05-15 08:40:33.430239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.485 [2024-05-15 08:40:33.430531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-05-15 08:40:33.430539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-05-15 08:40:33.430546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-05-15 08:40:33.433256] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.485 [2024-05-15 08:40:33.441918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-05-15 08:40:33.442260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.442413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.442424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-05-15 08:40:33.442431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.485 [2024-05-15 08:40:33.442605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.485 [2024-05-15 08:40:33.442780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-05-15 08:40:33.442788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-05-15 08:40:33.442795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-05-15 08:40:33.445512] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.485 [2024-05-15 08:40:33.454786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-05-15 08:40:33.455154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.455338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.455370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-05-15 08:40:33.455393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.485 [2024-05-15 08:40:33.455980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.485 [2024-05-15 08:40:33.456253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-05-15 08:40:33.456263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-05-15 08:40:33.456270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-05-15 08:40:33.459047] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.485 [2024-05-15 08:40:33.467900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-05-15 08:40:33.468312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.468397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.468408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-05-15 08:40:33.468415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.485 [2024-05-15 08:40:33.468595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.485 [2024-05-15 08:40:33.468775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-05-15 08:40:33.468784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-05-15 08:40:33.468790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-05-15 08:40:33.471656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.485 [2024-05-15 08:40:33.481146] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-05-15 08:40:33.481567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.481768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.481779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-05-15 08:40:33.481786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.485 [2024-05-15 08:40:33.481966] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.485 [2024-05-15 08:40:33.482146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-05-15 08:40:33.482154] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-05-15 08:40:33.482161] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-05-15 08:40:33.485035] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.485 [2024-05-15 08:40:33.494278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-05-15 08:40:33.494708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.494929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-05-15 08:40:33.494942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-05-15 08:40:33.494949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.485 [2024-05-15 08:40:33.495129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.485 [2024-05-15 08:40:33.495314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-05-15 08:40:33.495322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-05-15 08:40:33.495329] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-05-15 08:40:33.498200] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.746 [2024-05-15 08:40:33.507576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.746 [2024-05-15 08:40:33.508021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.508222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.508235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.746 [2024-05-15 08:40:33.508243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.746 [2024-05-15 08:40:33.508428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.746 [2024-05-15 08:40:33.508633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.746 [2024-05-15 08:40:33.508642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.746 [2024-05-15 08:40:33.508650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.746 [2024-05-15 08:40:33.511746] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.746 [2024-05-15 08:40:33.520879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.746 [2024-05-15 08:40:33.521329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.521502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.521513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.746 [2024-05-15 08:40:33.521520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.746 [2024-05-15 08:40:33.521705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.746 [2024-05-15 08:40:33.521891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.746 [2024-05-15 08:40:33.521899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.746 [2024-05-15 08:40:33.521906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.746 [2024-05-15 08:40:33.524862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.746 [2024-05-15 08:40:33.534106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.746 [2024-05-15 08:40:33.534571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.534772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.534784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.746 [2024-05-15 08:40:33.534794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.746 [2024-05-15 08:40:33.534991] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.746 [2024-05-15 08:40:33.535194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.746 [2024-05-15 08:40:33.535204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.746 [2024-05-15 08:40:33.535211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.746 [2024-05-15 08:40:33.538249] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.746 [2024-05-15 08:40:33.547356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.746 [2024-05-15 08:40:33.547807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.548026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.548037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.746 [2024-05-15 08:40:33.548044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.746 [2024-05-15 08:40:33.548247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.746 [2024-05-15 08:40:33.548444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.746 [2024-05-15 08:40:33.548453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.746 [2024-05-15 08:40:33.548460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.746 [2024-05-15 08:40:33.551612] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.746 [2024-05-15 08:40:33.560738] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.746 [2024-05-15 08:40:33.561189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.561339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.561351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.746 [2024-05-15 08:40:33.561358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.746 [2024-05-15 08:40:33.561555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.746 [2024-05-15 08:40:33.561753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.746 [2024-05-15 08:40:33.561762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.746 [2024-05-15 08:40:33.561768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.746 [2024-05-15 08:40:33.564929] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.746 [2024-05-15 08:40:33.574218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.746 [2024-05-15 08:40:33.574631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.574779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.574790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.746 [2024-05-15 08:40:33.574797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.746 [2024-05-15 08:40:33.574985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.746 [2024-05-15 08:40:33.575178] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.746 [2024-05-15 08:40:33.575187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.746 [2024-05-15 08:40:33.575194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.746 [2024-05-15 08:40:33.578142] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.746 [2024-05-15 08:40:33.587385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.746 [2024-05-15 08:40:33.587820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.587977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.588008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.746 [2024-05-15 08:40:33.588029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.746 [2024-05-15 08:40:33.588631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.746 [2024-05-15 08:40:33.588906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.746 [2024-05-15 08:40:33.588914] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.746 [2024-05-15 08:40:33.588920] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.746 [2024-05-15 08:40:33.591791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.746 [2024-05-15 08:40:33.600413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.746 [2024-05-15 08:40:33.600730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.600883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.746 [2024-05-15 08:40:33.600893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.746 [2024-05-15 08:40:33.600900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.746 [2024-05-15 08:40:33.601079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.746 [2024-05-15 08:40:33.601264] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.746 [2024-05-15 08:40:33.601273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.746 [2024-05-15 08:40:33.601279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.746 [2024-05-15 08:40:33.604149] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.746 [2024-05-15 08:40:33.613625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.747 [2024-05-15 08:40:33.614020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.614212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.614244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.747 [2024-05-15 08:40:33.614266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.747 [2024-05-15 08:40:33.614701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.747 [2024-05-15 08:40:33.614878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.747 [2024-05-15 08:40:33.614886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.747 [2024-05-15 08:40:33.614892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.747 [2024-05-15 08:40:33.617679] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.747 [2024-05-15 08:40:33.626719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.747 [2024-05-15 08:40:33.627000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.627152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.627163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.747 [2024-05-15 08:40:33.627178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.747 [2024-05-15 08:40:33.627355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.747 [2024-05-15 08:40:33.627529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.747 [2024-05-15 08:40:33.627537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.747 [2024-05-15 08:40:33.627543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.747 [2024-05-15 08:40:33.630328] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.747 [2024-05-15 08:40:33.639742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.747 [2024-05-15 08:40:33.640114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.640224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.640235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.747 [2024-05-15 08:40:33.640242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.747 [2024-05-15 08:40:33.640417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.747 [2024-05-15 08:40:33.640591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.747 [2024-05-15 08:40:33.640600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.747 [2024-05-15 08:40:33.640606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.747 [2024-05-15 08:40:33.643343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.747 [2024-05-15 08:40:33.652611] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.747 [2024-05-15 08:40:33.653057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.653247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.653281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.747 [2024-05-15 08:40:33.653303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.747 [2024-05-15 08:40:33.653745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.747 [2024-05-15 08:40:33.653911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.747 [2024-05-15 08:40:33.653921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.747 [2024-05-15 08:40:33.653927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.747 [2024-05-15 08:40:33.656647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.747 [2024-05-15 08:40:33.665454] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.747 [2024-05-15 08:40:33.665822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.666044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.666054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.747 [2024-05-15 08:40:33.666061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.747 [2024-05-15 08:40:33.666242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.747 [2024-05-15 08:40:33.666417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.747 [2024-05-15 08:40:33.666425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.747 [2024-05-15 08:40:33.666431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.747 [2024-05-15 08:40:33.669148] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.747 [2024-05-15 08:40:33.678419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.747 [2024-05-15 08:40:33.678776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.678876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.678886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.747 [2024-05-15 08:40:33.678893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.747 [2024-05-15 08:40:33.679067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.747 [2024-05-15 08:40:33.679247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.747 [2024-05-15 08:40:33.679255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.747 [2024-05-15 08:40:33.679261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.747 [2024-05-15 08:40:33.681979] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.747 [2024-05-15 08:40:33.691358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.747 [2024-05-15 08:40:33.691655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.691902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.691934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.747 [2024-05-15 08:40:33.691956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.747 [2024-05-15 08:40:33.692556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.747 [2024-05-15 08:40:33.693152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.747 [2024-05-15 08:40:33.693160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.747 [2024-05-15 08:40:33.693173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.747 [2024-05-15 08:40:33.695891] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.747 [2024-05-15 08:40:33.704236] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.747 [2024-05-15 08:40:33.704587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.704739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.747 [2024-05-15 08:40:33.704750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.747 [2024-05-15 08:40:33.704756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.747 [2024-05-15 08:40:33.704930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.747 [2024-05-15 08:40:33.705104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.747 [2024-05-15 08:40:33.705112] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.748 [2024-05-15 08:40:33.705118] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.748 [2024-05-15 08:40:33.707847] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.748 [2024-05-15 08:40:33.717120] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.748 [2024-05-15 08:40:33.717471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-05-15 08:40:33.717553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-05-15 08:40:33.717563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.748 [2024-05-15 08:40:33.717570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.748 [2024-05-15 08:40:33.717744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.748 [2024-05-15 08:40:33.717918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.748 [2024-05-15 08:40:33.717926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.748 [2024-05-15 08:40:33.717933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.748 [2024-05-15 08:40:33.720651] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.748 [2024-05-15 08:40:33.730071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.748 [2024-05-15 08:40:33.730448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-05-15 08:40:33.730599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-05-15 08:40:33.730609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.748 [2024-05-15 08:40:33.730616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.748 [2024-05-15 08:40:33.730790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.748 [2024-05-15 08:40:33.730964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.748 [2024-05-15 08:40:33.730973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.748 [2024-05-15 08:40:33.730979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.748 [2024-05-15 08:40:33.733769] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.748 [2024-05-15 08:40:33.743029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.748 [2024-05-15 08:40:33.743382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-05-15 08:40:33.743469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-05-15 08:40:33.743479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.748 [2024-05-15 08:40:33.743487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.748 [2024-05-15 08:40:33.743661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.748 [2024-05-15 08:40:33.743835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.748 [2024-05-15 08:40:33.743844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.748 [2024-05-15 08:40:33.743850] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.748 [2024-05-15 08:40:33.746667] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.748 [2024-05-15 08:40:33.756045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.748 [2024-05-15 08:40:33.756336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-05-15 08:40:33.756535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-05-15 08:40:33.756545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:46.748 [2024-05-15 08:40:33.756552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:46.748 [2024-05-15 08:40:33.756726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:46.748 [2024-05-15 08:40:33.756900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.748 [2024-05-15 08:40:33.756908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.748 [2024-05-15 08:40:33.756914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.748 [2024-05-15 08:40:33.759700] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.009 [2024-05-15 08:40:33.769152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.009 [2024-05-15 08:40:33.769544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.009 [2024-05-15 08:40:33.769675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.009 [2024-05-15 08:40:33.769685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.009 [2024-05-15 08:40:33.769693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.009 [2024-05-15 08:40:33.769872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.009 [2024-05-15 08:40:33.770051] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.009 [2024-05-15 08:40:33.770060] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.009 [2024-05-15 08:40:33.770066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.009 [2024-05-15 08:40:33.772868] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.009 [2024-05-15 08:40:33.782124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.009 [2024-05-15 08:40:33.782556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.009 [2024-05-15 08:40:33.782708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.009 [2024-05-15 08:40:33.782719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.009 [2024-05-15 08:40:33.782726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.009 [2024-05-15 08:40:33.782900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.009 [2024-05-15 08:40:33.783075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.009 [2024-05-15 08:40:33.783083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.009 [2024-05-15 08:40:33.783089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.009 [2024-05-15 08:40:33.785806] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.009 [2024-05-15 08:40:33.795078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.010 [2024-05-15 08:40:33.795502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.795595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.795605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.010 [2024-05-15 08:40:33.795612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.010 [2024-05-15 08:40:33.795786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.010 [2024-05-15 08:40:33.795960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.010 [2024-05-15 08:40:33.795969] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.010 [2024-05-15 08:40:33.795975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.010 [2024-05-15 08:40:33.798699] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.010 [2024-05-15 08:40:33.807973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.010 [2024-05-15 08:40:33.808279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.808425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.808435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.010 [2024-05-15 08:40:33.808442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.010 [2024-05-15 08:40:33.808617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.010 [2024-05-15 08:40:33.808791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.010 [2024-05-15 08:40:33.808799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.010 [2024-05-15 08:40:33.808806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.010 [2024-05-15 08:40:33.811519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.010 [2024-05-15 08:40:33.821116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.010 [2024-05-15 08:40:33.821471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.821582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.821592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.010 [2024-05-15 08:40:33.821599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.010 [2024-05-15 08:40:33.821773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.010 [2024-05-15 08:40:33.821947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.010 [2024-05-15 08:40:33.821955] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.010 [2024-05-15 08:40:33.821963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.010 [2024-05-15 08:40:33.824752] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.010 [2024-05-15 08:40:33.834012] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.010 [2024-05-15 08:40:33.834430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.834681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.834712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.010 [2024-05-15 08:40:33.834733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.010 [2024-05-15 08:40:33.835144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.010 [2024-05-15 08:40:33.835323] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.010 [2024-05-15 08:40:33.835332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.010 [2024-05-15 08:40:33.835338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.010 [2024-05-15 08:40:33.838053] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.010 [2024-05-15 08:40:33.846851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.010 [2024-05-15 08:40:33.847307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.847499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.847530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.010 [2024-05-15 08:40:33.847552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.010 [2024-05-15 08:40:33.848138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.010 [2024-05-15 08:40:33.848497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.010 [2024-05-15 08:40:33.848506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.010 [2024-05-15 08:40:33.848512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.010 [2024-05-15 08:40:33.851244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.010 [2024-05-15 08:40:33.860074] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.010 [2024-05-15 08:40:33.860493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.860716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.860730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.010 [2024-05-15 08:40:33.860737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.010 [2024-05-15 08:40:33.860917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.010 [2024-05-15 08:40:33.861097] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.010 [2024-05-15 08:40:33.861105] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.010 [2024-05-15 08:40:33.861111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.010 [2024-05-15 08:40:33.863941] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.010 [2024-05-15 08:40:33.873200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.010 [2024-05-15 08:40:33.873605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.873833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.010 [2024-05-15 08:40:33.873843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.010 [2024-05-15 08:40:33.873850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.010 [2024-05-15 08:40:33.874023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.010 [2024-05-15 08:40:33.874205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.010 [2024-05-15 08:40:33.874215] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.010 [2024-05-15 08:40:33.874221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.010 [2024-05-15 08:40:33.876929] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.010 [2024-05-15 08:40:33.886176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.010 [2024-05-15 08:40:33.886592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.886802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.886812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.011 [2024-05-15 08:40:33.886819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.011 [2024-05-15 08:40:33.886992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.011 [2024-05-15 08:40:33.887173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.011 [2024-05-15 08:40:33.887183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.011 [2024-05-15 08:40:33.887189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.011 [2024-05-15 08:40:33.889939] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.011 [2024-05-15 08:40:33.899020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.011 [2024-05-15 08:40:33.899451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.899689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.899699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.011 [2024-05-15 08:40:33.899709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.011 [2024-05-15 08:40:33.899883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.011 [2024-05-15 08:40:33.900057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.011 [2024-05-15 08:40:33.900065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.011 [2024-05-15 08:40:33.900071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.011 [2024-05-15 08:40:33.902843] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.011 [2024-05-15 08:40:33.911982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.011 [2024-05-15 08:40:33.912352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.912499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.912509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.011 [2024-05-15 08:40:33.912516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.011 [2024-05-15 08:40:33.912689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.011 [2024-05-15 08:40:33.912862] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.011 [2024-05-15 08:40:33.912870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.011 [2024-05-15 08:40:33.912876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.011 [2024-05-15 08:40:33.915593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.011 [2024-05-15 08:40:33.924841] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.011 [2024-05-15 08:40:33.925284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.925506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.925537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.011 [2024-05-15 08:40:33.925559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.011 [2024-05-15 08:40:33.926145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.011 [2024-05-15 08:40:33.926406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.011 [2024-05-15 08:40:33.926415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.011 [2024-05-15 08:40:33.926421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.011 [2024-05-15 08:40:33.929130] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.011 [2024-05-15 08:40:33.937743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.011 [2024-05-15 08:40:33.938185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.938382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.938392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.011 [2024-05-15 08:40:33.938398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.011 [2024-05-15 08:40:33.938565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.011 [2024-05-15 08:40:33.938729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.011 [2024-05-15 08:40:33.938737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.011 [2024-05-15 08:40:33.938742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.011 [2024-05-15 08:40:33.941446] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.011 [2024-05-15 08:40:33.950679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.011 [2024-05-15 08:40:33.951118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.951341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.951352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.011 [2024-05-15 08:40:33.951359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.011 [2024-05-15 08:40:33.951536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.011 [2024-05-15 08:40:33.951701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.011 [2024-05-15 08:40:33.951709] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.011 [2024-05-15 08:40:33.951714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.011 [2024-05-15 08:40:33.954415] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.011 [2024-05-15 08:40:33.963507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.011 [2024-05-15 08:40:33.963958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.964195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.964233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.011 [2024-05-15 08:40:33.964255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.011 [2024-05-15 08:40:33.964646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.011 [2024-05-15 08:40:33.964810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.011 [2024-05-15 08:40:33.964818] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.011 [2024-05-15 08:40:33.964824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.011 [2024-05-15 08:40:33.967443] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.011 [2024-05-15 08:40:33.976429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.011 [2024-05-15 08:40:33.976854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.977076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.011 [2024-05-15 08:40:33.977086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.011 [2024-05-15 08:40:33.977092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.011 [2024-05-15 08:40:33.977284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.012 [2024-05-15 08:40:33.977463] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.012 [2024-05-15 08:40:33.977471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.012 [2024-05-15 08:40:33.977478] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.012 [2024-05-15 08:40:33.980189] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.012 [2024-05-15 08:40:33.989342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.012 [2024-05-15 08:40:33.989783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-05-15 08:40:33.989948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-05-15 08:40:33.989958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.012 [2024-05-15 08:40:33.989965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.012 [2024-05-15 08:40:33.990139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.012 [2024-05-15 08:40:33.990321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.012 [2024-05-15 08:40:33.990330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.012 [2024-05-15 08:40:33.990337] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.012 [2024-05-15 08:40:33.993041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.012 [2024-05-15 08:40:34.002305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.012 [2024-05-15 08:40:34.002771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-05-15 08:40:34.003040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-05-15 08:40:34.003070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.012 [2024-05-15 08:40:34.003092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.012 [2024-05-15 08:40:34.003698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.012 [2024-05-15 08:40:34.003971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.012 [2024-05-15 08:40:34.003979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.012 [2024-05-15 08:40:34.003985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.012 [2024-05-15 08:40:34.006697] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.012 [2024-05-15 08:40:34.015130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.012 [2024-05-15 08:40:34.015571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-05-15 08:40:34.015838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-05-15 08:40:34.015870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.012 [2024-05-15 08:40:34.015892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.012 [2024-05-15 08:40:34.016111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.012 [2024-05-15 08:40:34.016303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.012 [2024-05-15 08:40:34.016315] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.012 [2024-05-15 08:40:34.016321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.012 [2024-05-15 08:40:34.019025] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.012 [2024-05-15 08:40:34.028140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.012 [2024-05-15 08:40:34.028592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-05-15 08:40:34.028855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-05-15 08:40:34.028885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.012 [2024-05-15 08:40:34.028907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.012 [2024-05-15 08:40:34.029519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.012 [2024-05-15 08:40:34.029855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.012 [2024-05-15 08:40:34.029863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.012 [2024-05-15 08:40:34.029869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.272 [2024-05-15 08:40:34.032762] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.272 [2024-05-15 08:40:34.041238] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.272 [2024-05-15 08:40:34.041693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.272 [2024-05-15 08:40:34.041904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.272 [2024-05-15 08:40:34.041914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.272 [2024-05-15 08:40:34.041921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.272 [2024-05-15 08:40:34.042095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.272 [2024-05-15 08:40:34.042276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.272 [2024-05-15 08:40:34.042285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.272 [2024-05-15 08:40:34.042291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.272 [2024-05-15 08:40:34.045003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.273 [2024-05-15 08:40:34.054114] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.273 [2024-05-15 08:40:34.054541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.054724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.054756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.273 [2024-05-15 08:40:34.054778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.273 [2024-05-15 08:40:34.055383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.273 [2024-05-15 08:40:34.055660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.273 [2024-05-15 08:40:34.055668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.273 [2024-05-15 08:40:34.055677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.273 [2024-05-15 08:40:34.058450] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.273 [2024-05-15 08:40:34.066966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.273 [2024-05-15 08:40:34.067422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.067614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.067644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.273 [2024-05-15 08:40:34.067667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.273 [2024-05-15 08:40:34.067977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.273 [2024-05-15 08:40:34.068151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.273 [2024-05-15 08:40:34.068159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.273 [2024-05-15 08:40:34.068170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.273 [2024-05-15 08:40:34.070883] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.273 [2024-05-15 08:40:34.079824] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.273 [2024-05-15 08:40:34.080268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.080486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.080496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.273 [2024-05-15 08:40:34.080502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.273 [2024-05-15 08:40:34.080667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.273 [2024-05-15 08:40:34.080835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.273 [2024-05-15 08:40:34.080843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.273 [2024-05-15 08:40:34.080849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.273 [2024-05-15 08:40:34.083627] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.273 [2024-05-15 08:40:34.092816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.273 [2024-05-15 08:40:34.093249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.093397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.093438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.273 [2024-05-15 08:40:34.093460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.273 [2024-05-15 08:40:34.093998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.273 [2024-05-15 08:40:34.094177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.273 [2024-05-15 08:40:34.094186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.273 [2024-05-15 08:40:34.094192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.273 [2024-05-15 08:40:34.096967] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.273 [2024-05-15 08:40:34.105705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.273 [2024-05-15 08:40:34.106075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.106296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.106308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.273 [2024-05-15 08:40:34.106315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.273 [2024-05-15 08:40:34.106495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.273 [2024-05-15 08:40:34.106674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.273 [2024-05-15 08:40:34.106683] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.273 [2024-05-15 08:40:34.106689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.273 [2024-05-15 08:40:34.109562] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.273 [2024-05-15 08:40:34.118827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.273 [2024-05-15 08:40:34.119211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.119403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.119433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.273 [2024-05-15 08:40:34.119455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.273 [2024-05-15 08:40:34.119805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.273 [2024-05-15 08:40:34.119985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.273 [2024-05-15 08:40:34.119993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.273 [2024-05-15 08:40:34.119999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.273 [2024-05-15 08:40:34.122875] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.273 [2024-05-15 08:40:34.131964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.273 [2024-05-15 08:40:34.132413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.132584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.132594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.273 [2024-05-15 08:40:34.132601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.273 [2024-05-15 08:40:34.132781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.273 [2024-05-15 08:40:34.132960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.273 [2024-05-15 08:40:34.132968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.273 [2024-05-15 08:40:34.132975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.273 [2024-05-15 08:40:34.135803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.273 [2024-05-15 08:40:34.144956] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.273 [2024-05-15 08:40:34.145378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.145595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.145605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.273 [2024-05-15 08:40:34.145612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.273 [2024-05-15 08:40:34.145777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.273 [2024-05-15 08:40:34.145940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.273 [2024-05-15 08:40:34.145948] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.273 [2024-05-15 08:40:34.145954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.273 [2024-05-15 08:40:34.148679] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.273 [2024-05-15 08:40:34.157924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.273 [2024-05-15 08:40:34.158285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.158426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.273 [2024-05-15 08:40:34.158436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.273 [2024-05-15 08:40:34.158443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.273 [2024-05-15 08:40:34.158617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.273 [2024-05-15 08:40:34.158791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.273 [2024-05-15 08:40:34.158799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.273 [2024-05-15 08:40:34.158805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.273 [2024-05-15 08:40:34.161518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.274 [2024-05-15 08:40:34.170761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.274 [2024-05-15 08:40:34.171151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.274 [2024-05-15 08:40:34.171408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.274 [2024-05-15 08:40:34.171438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.274 [2024-05-15 08:40:34.171460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.274 [2024-05-15 08:40:34.172045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.274 [2024-05-15 08:40:34.172352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.274 [2024-05-15 08:40:34.172361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.274 [2024-05-15 08:40:34.172367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.274 [2024-05-15 08:40:34.175074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.274 [2024-05-15 08:40:34.183700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.274 [2024-05-15 08:40:34.184017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.274 [2024-05-15 08:40:34.184140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.274 [2024-05-15 08:40:34.184150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.274 [2024-05-15 08:40:34.184157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.274 [2024-05-15 08:40:34.184353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.274 [2024-05-15 08:40:34.184528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.274 [2024-05-15 08:40:34.184536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.274 [2024-05-15 08:40:34.184542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.274 [2024-05-15 08:40:34.187253] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.274 [2024-05-15 08:40:34.196535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.274 [2024-05-15 08:40:34.196991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.274 [2024-05-15 08:40:34.197272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.274 [2024-05-15 08:40:34.197310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.274 [2024-05-15 08:40:34.197334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.274 [2024-05-15 08:40:34.197923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.274 [2024-05-15 08:40:34.198271] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.274 [2024-05-15 08:40:34.198280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.274 [2024-05-15 08:40:34.198286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.274 [2024-05-15 08:40:34.201064] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.274 [2024-05-15 08:40:34.209507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.274 [2024-05-15 08:40:34.209931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.274 [2024-05-15 08:40:34.210116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.274 [2024-05-15 08:40:34.210147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:47.274 [2024-05-15 08:40:34.210182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:47.274 [2024-05-15 08:40:34.210771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:47.274 [2024-05-15 08:40:34.211121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.274 [2024-05-15 08:40:34.211129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.274 [2024-05-15 08:40:34.211135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.274 [2024-05-15 08:40:34.213855] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.274 [2024-05-15 08:40:34.222326] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.274 [2024-05-15 08:40:34.222719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.274 [2024-05-15 08:40:34.222936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.274 [2024-05-15 08:40:34.222973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.274 [2024-05-15 08:40:34.222995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.274 [2024-05-15 08:40:34.223409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.274 [2024-05-15 08:40:34.223575] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.274 [2024-05-15 08:40:34.223582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.274 [2024-05-15 08:40:34.223588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.274 [2024-05-15 08:40:34.226213] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.274 [2024-05-15 08:40:34.235201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.274 [2024-05-15 08:40:34.235595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.274 [2024-05-15 08:40:34.235742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.274 [2024-05-15 08:40:34.235752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.274 [2024-05-15 08:40:34.235758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.274 [2024-05-15 08:40:34.235923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.274 [2024-05-15 08:40:34.236087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.274 [2024-05-15 08:40:34.236095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.274 [2024-05-15 08:40:34.236100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.274 [2024-05-15 08:40:34.238831] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.274 [2024-05-15 08:40:34.248073] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.274 [2024-05-15 08:40:34.248516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.274 [2024-05-15 08:40:34.248767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.274 [2024-05-15 08:40:34.248777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.274 [2024-05-15 08:40:34.248809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.274 [2024-05-15 08:40:34.249379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.274 [2024-05-15 08:40:34.249554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.274 [2024-05-15 08:40:34.249562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.274 [2024-05-15 08:40:34.249568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.274 [2024-05-15 08:40:34.252275] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.274 [2024-05-15 08:40:34.261063] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.274 [2024-05-15 08:40:34.261493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.274 [2024-05-15 08:40:34.261741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.274 [2024-05-15 08:40:34.261772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.274 [2024-05-15 08:40:34.261800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.274 [2024-05-15 08:40:34.262405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.274 [2024-05-15 08:40:34.262597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.274 [2024-05-15 08:40:34.262604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.274 [2024-05-15 08:40:34.262611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.274 [2024-05-15 08:40:34.265321] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.274 [2024-05-15 08:40:34.274105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.274 [2024-05-15 08:40:34.274557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.274 [2024-05-15 08:40:34.274823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.274 [2024-05-15 08:40:34.274854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.274 [2024-05-15 08:40:34.274875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.274 [2024-05-15 08:40:34.275483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.274 [2024-05-15 08:40:34.275684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.274 [2024-05-15 08:40:34.275692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.274 [2024-05-15 08:40:34.275699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.274 [2024-05-15 08:40:34.278405] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.274 [2024-05-15 08:40:34.287067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.275 [2024-05-15 08:40:34.287491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.275 [2024-05-15 08:40:34.287668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.275 [2024-05-15 08:40:34.287699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.275 [2024-05-15 08:40:34.287724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.275 [2024-05-15 08:40:34.288340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.275 [2024-05-15 08:40:34.288607] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.275 [2024-05-15 08:40:34.288615] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.275 [2024-05-15 08:40:34.288621] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.275 [2024-05-15 08:40:34.291463] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.537 [2024-05-15 08:40:34.300093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.537 [2024-05-15 08:40:34.300527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.300675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.300686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.537 [2024-05-15 08:40:34.300693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.537 [2024-05-15 08:40:34.300877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.537 [2024-05-15 08:40:34.301056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.537 [2024-05-15 08:40:34.301064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.537 [2024-05-15 08:40:34.301071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.537 [2024-05-15 08:40:34.303896] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.537 [2024-05-15 08:40:34.312927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.537 [2024-05-15 08:40:34.313345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.313565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.313574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.537 [2024-05-15 08:40:34.313581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.537 [2024-05-15 08:40:34.313744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.537 [2024-05-15 08:40:34.313908] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.537 [2024-05-15 08:40:34.313916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.537 [2024-05-15 08:40:34.313921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.537 [2024-05-15 08:40:34.316635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.537 [2024-05-15 08:40:34.325874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.537 [2024-05-15 08:40:34.326300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.326545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.326555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.537 [2024-05-15 08:40:34.326562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.537 [2024-05-15 08:40:34.326735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.537 [2024-05-15 08:40:34.326909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.537 [2024-05-15 08:40:34.326917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.537 [2024-05-15 08:40:34.326923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.537 [2024-05-15 08:40:34.329632] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.537 [2024-05-15 08:40:34.338716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.537 [2024-05-15 08:40:34.339133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.339354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.339365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.537 [2024-05-15 08:40:34.339372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.537 [2024-05-15 08:40:34.339546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.537 [2024-05-15 08:40:34.339723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.537 [2024-05-15 08:40:34.339731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.537 [2024-05-15 08:40:34.339737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.537 [2024-05-15 08:40:34.342447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.537 [2024-05-15 08:40:34.351537] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.537 [2024-05-15 08:40:34.351931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.352079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.352089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.537 [2024-05-15 08:40:34.352096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.537 [2024-05-15 08:40:34.352278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.537 [2024-05-15 08:40:34.352453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.537 [2024-05-15 08:40:34.352461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.537 [2024-05-15 08:40:34.352467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.537 [2024-05-15 08:40:34.355173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.537 [2024-05-15 08:40:34.364392] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.537 [2024-05-15 08:40:34.364820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.365037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.365047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.537 [2024-05-15 08:40:34.365054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.537 [2024-05-15 08:40:34.365241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.537 [2024-05-15 08:40:34.365422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.537 [2024-05-15 08:40:34.365430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.537 [2024-05-15 08:40:34.365436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.537 [2024-05-15 08:40:34.368297] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.537 [2024-05-15 08:40:34.377557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.537 [2024-05-15 08:40:34.377976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.378135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.378145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.537 [2024-05-15 08:40:34.378152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.537 [2024-05-15 08:40:34.378337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.537 [2024-05-15 08:40:34.378523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.537 [2024-05-15 08:40:34.378534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.537 [2024-05-15 08:40:34.378540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.537 [2024-05-15 08:40:34.381322] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.537 [2024-05-15 08:40:34.390435] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.537 [2024-05-15 08:40:34.390832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.537 [2024-05-15 08:40:34.391028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.391037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.538 [2024-05-15 08:40:34.391044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.538 [2024-05-15 08:40:34.391226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.538 [2024-05-15 08:40:34.391401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.538 [2024-05-15 08:40:34.391409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.538 [2024-05-15 08:40:34.391416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.538 [2024-05-15 08:40:34.394124] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.538 [2024-05-15 08:40:34.403359] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.538 [2024-05-15 08:40:34.403779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.404001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.404010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.538 [2024-05-15 08:40:34.404017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.538 [2024-05-15 08:40:34.404199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.538 [2024-05-15 08:40:34.404374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.538 [2024-05-15 08:40:34.404382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.538 [2024-05-15 08:40:34.404388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.538 [2024-05-15 08:40:34.407093] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.538 [2024-05-15 08:40:34.416176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.538 [2024-05-15 08:40:34.416591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.416737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.416747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.538 [2024-05-15 08:40:34.416753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.538 [2024-05-15 08:40:34.416926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.538 [2024-05-15 08:40:34.417100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.538 [2024-05-15 08:40:34.417108] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.538 [2024-05-15 08:40:34.417117] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.538 [2024-05-15 08:40:34.419833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.538 [2024-05-15 08:40:34.429061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.538 [2024-05-15 08:40:34.429407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.429626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.429636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.538 [2024-05-15 08:40:34.429643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.538 [2024-05-15 08:40:34.429816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.538 [2024-05-15 08:40:34.429990] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.538 [2024-05-15 08:40:34.429998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.538 [2024-05-15 08:40:34.430004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.538 [2024-05-15 08:40:34.432715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.538 [2024-05-15 08:40:34.441951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.538 [2024-05-15 08:40:34.442369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.442512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.442522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.538 [2024-05-15 08:40:34.442529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.538 [2024-05-15 08:40:34.442703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.538 [2024-05-15 08:40:34.442876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.538 [2024-05-15 08:40:34.442884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.538 [2024-05-15 08:40:34.442890] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.538 [2024-05-15 08:40:34.445602] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.538 [2024-05-15 08:40:34.454799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.538 [2024-05-15 08:40:34.455123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.455349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.455361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.538 [2024-05-15 08:40:34.455368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.538 [2024-05-15 08:40:34.455542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.538 [2024-05-15 08:40:34.455716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.538 [2024-05-15 08:40:34.455724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.538 [2024-05-15 08:40:34.455730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.538 [2024-05-15 08:40:34.458483] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.538 [2024-05-15 08:40:34.467750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.538 [2024-05-15 08:40:34.468170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.468332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.468342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.538 [2024-05-15 08:40:34.468348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.538 [2024-05-15 08:40:34.468522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.538 [2024-05-15 08:40:34.468696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.538 [2024-05-15 08:40:34.468704] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.538 [2024-05-15 08:40:34.468711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.538 [2024-05-15 08:40:34.471421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
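Every failed attempt in this section walks the same sequence of log sites: nvme_ctrlr_disconnect tears the admin connection down, the async reconnect poll tries to bring it back, the socket connect fails, nvme_ctrlr_fail leaves the controller in the failed state, and _bdev_nvme_reset_ctrlr_complete reports "Resetting controller failed." before the next attempt is scheduled. A hedged sketch of that retry shape (hypothetical names, not the SPDK API) looks like:

/* illustrative only -- hypothetical names, not SPDK's implementation */
#include <stdbool.h>
#include <stdio.h>

enum ctrlr_state { CTRLR_RESETTING, CTRLR_FAILED, CTRLR_READY };

/* stand-in for the transport connect; returns false while the target
 * is not listening, i.e. the errno = 111 case seen in the log */
static bool transport_connect(void) { return false; }

/* one reset attempt: disconnect, then try to reinitialize */
static enum ctrlr_state reset_ctrlr_once(void)
{
    return transport_connect() ? CTRLR_READY : CTRLR_FAILED;
}

int main(void)
{
    /* the real driver keeps retrying until the target comes back or it
     * is told to stop; a small bound keeps this sketch terminating */
    for (int attempt = 1; attempt <= 5; attempt++) {
        if (reset_ctrlr_once() == CTRLR_READY) {
            printf("attempt %d: controller ready\n", attempt);
            return 0;
        }
        printf("attempt %d: Resetting controller failed.\n", attempt);
    }
    return 1;
}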
00:28:47.538 [2024-05-15 08:40:34.480665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.538 [2024-05-15 08:40:34.481082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.481254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.538 [2024-05-15 08:40:34.481266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.538 [2024-05-15 08:40:34.481273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.538 [2024-05-15 08:40:34.481447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.538 [2024-05-15 08:40:34.481620] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.538 [2024-05-15 08:40:34.481628] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.538 [2024-05-15 08:40:34.481634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.538 [2024-05-15 08:40:34.484401] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.538 [2024-05-15 08:40:34.493482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.538 [2024-05-15 08:40:34.493878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.539 [2024-05-15 08:40:34.494023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.539 [2024-05-15 08:40:34.494033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.539 [2024-05-15 08:40:34.494039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.539 [2024-05-15 08:40:34.494229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.539 [2024-05-15 08:40:34.494404] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.539 [2024-05-15 08:40:34.494412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.539 [2024-05-15 08:40:34.494419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.539 [2024-05-15 08:40:34.497120] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.539 [2024-05-15 08:40:34.506361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.539 [2024-05-15 08:40:34.506782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.539 [2024-05-15 08:40:34.507006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.539 [2024-05-15 08:40:34.507016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.539 [2024-05-15 08:40:34.507023] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.539 [2024-05-15 08:40:34.507211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.539 [2024-05-15 08:40:34.507386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.539 [2024-05-15 08:40:34.507395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.539 [2024-05-15 08:40:34.507401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.539 [2024-05-15 08:40:34.510106] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.539 [2024-05-15 08:40:34.519427] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.539 [2024-05-15 08:40:34.519801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.539 [2024-05-15 08:40:34.519892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.539 [2024-05-15 08:40:34.519903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.539 [2024-05-15 08:40:34.519910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.539 [2024-05-15 08:40:34.520088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.539 [2024-05-15 08:40:34.520273] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.539 [2024-05-15 08:40:34.520282] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.539 [2024-05-15 08:40:34.520288] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.539 [2024-05-15 08:40:34.523115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.539 [2024-05-15 08:40:34.532577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.539 [2024-05-15 08:40:34.533007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.539 [2024-05-15 08:40:34.533236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.539 [2024-05-15 08:40:34.533247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.539 [2024-05-15 08:40:34.533254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.539 [2024-05-15 08:40:34.533433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.539 [2024-05-15 08:40:34.533612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.539 [2024-05-15 08:40:34.533620] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.539 [2024-05-15 08:40:34.533627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.539 [2024-05-15 08:40:34.536458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.539 [2024-05-15 08:40:34.545443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.539 [2024-05-15 08:40:34.545873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.539 [2024-05-15 08:40:34.546056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.539 [2024-05-15 08:40:34.546087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.539 [2024-05-15 08:40:34.546109] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.539 [2024-05-15 08:40:34.546708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.539 [2024-05-15 08:40:34.547143] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.539 [2024-05-15 08:40:34.547155] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.539 [2024-05-15 08:40:34.547177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.539 [2024-05-15 08:40:34.551298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.801 [2024-05-15 08:40:34.559228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.801 [2024-05-15 08:40:34.559603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.559815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.559845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.801 [2024-05-15 08:40:34.559868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.801 [2024-05-15 08:40:34.560285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.801 [2024-05-15 08:40:34.560460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.801 [2024-05-15 08:40:34.560468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.801 [2024-05-15 08:40:34.560475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.801 [2024-05-15 08:40:34.563278] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.801 [2024-05-15 08:40:34.572243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.801 [2024-05-15 08:40:34.572665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.572931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.572963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.801 [2024-05-15 08:40:34.572985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.801 [2024-05-15 08:40:34.573280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.801 [2024-05-15 08:40:34.573454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.801 [2024-05-15 08:40:34.573462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.801 [2024-05-15 08:40:34.573468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.801 [2024-05-15 08:40:34.576181] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.801 [2024-05-15 08:40:34.585218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.801 [2024-05-15 08:40:34.585593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.585756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.585800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.801 [2024-05-15 08:40:34.585822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.801 [2024-05-15 08:40:34.586343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.801 [2024-05-15 08:40:34.586513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.801 [2024-05-15 08:40:34.586521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.801 [2024-05-15 08:40:34.586526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.801 [2024-05-15 08:40:34.589228] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.801 [2024-05-15 08:40:34.598146] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.801 [2024-05-15 08:40:34.598526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.598751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.598761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.801 [2024-05-15 08:40:34.598767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.801 [2024-05-15 08:40:34.598931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.801 [2024-05-15 08:40:34.599095] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.801 [2024-05-15 08:40:34.599103] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.801 [2024-05-15 08:40:34.599109] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.801 [2024-05-15 08:40:34.601836] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.801 [2024-05-15 08:40:34.611092] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.801 [2024-05-15 08:40:34.611513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.611733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.611743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.801 [2024-05-15 08:40:34.611750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.801 [2024-05-15 08:40:34.611923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.801 [2024-05-15 08:40:34.612096] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.801 [2024-05-15 08:40:34.612104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.801 [2024-05-15 08:40:34.612110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.801 [2024-05-15 08:40:34.614869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.801 [2024-05-15 08:40:34.624189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.801 [2024-05-15 08:40:34.624597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.624794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.624805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.801 [2024-05-15 08:40:34.624814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.801 [2024-05-15 08:40:34.624993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.801 [2024-05-15 08:40:34.625177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.801 [2024-05-15 08:40:34.625185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.801 [2024-05-15 08:40:34.625191] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.801 [2024-05-15 08:40:34.627992] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.801 [2024-05-15 08:40:34.637232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.801 [2024-05-15 08:40:34.637633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.637855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.637865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.801 [2024-05-15 08:40:34.637872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.801 [2024-05-15 08:40:34.638046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.801 [2024-05-15 08:40:34.638245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.801 [2024-05-15 08:40:34.638255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.801 [2024-05-15 08:40:34.638261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.801 [2024-05-15 08:40:34.641081] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.801 [2024-05-15 08:40:34.650116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.801 [2024-05-15 08:40:34.650530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.650747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.650757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.801 [2024-05-15 08:40:34.650764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.801 [2024-05-15 08:40:34.650937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.801 [2024-05-15 08:40:34.651111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.801 [2024-05-15 08:40:34.651119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.801 [2024-05-15 08:40:34.651125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.801 [2024-05-15 08:40:34.653835] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.801 [2024-05-15 08:40:34.662973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.801 [2024-05-15 08:40:34.663386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.801 [2024-05-15 08:40:34.663609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.663619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.802 [2024-05-15 08:40:34.663626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.802 [2024-05-15 08:40:34.663803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.802 [2024-05-15 08:40:34.663976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.802 [2024-05-15 08:40:34.663984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.802 [2024-05-15 08:40:34.663990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.802 [2024-05-15 08:40:34.666706] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.802 [2024-05-15 08:40:34.675878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.802 [2024-05-15 08:40:34.676294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.676489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.676520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.802 [2024-05-15 08:40:34.676542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.802 [2024-05-15 08:40:34.677125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.802 [2024-05-15 08:40:34.677305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.802 [2024-05-15 08:40:34.677313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.802 [2024-05-15 08:40:34.677319] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.802 [2024-05-15 08:40:34.680025] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.802 [2024-05-15 08:40:34.688726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.802 [2024-05-15 08:40:34.689145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.689296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.689307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.802 [2024-05-15 08:40:34.689314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.802 [2024-05-15 08:40:34.689488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.802 [2024-05-15 08:40:34.689661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.802 [2024-05-15 08:40:34.689669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.802 [2024-05-15 08:40:34.689675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.802 [2024-05-15 08:40:34.692384] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.802 [2024-05-15 08:40:34.701588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.802 [2024-05-15 08:40:34.702007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.702248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.702281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.802 [2024-05-15 08:40:34.702302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.802 [2024-05-15 08:40:34.702605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.802 [2024-05-15 08:40:34.702782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.802 [2024-05-15 08:40:34.702790] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.802 [2024-05-15 08:40:34.702796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.802 [2024-05-15 08:40:34.705501] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.802 [2024-05-15 08:40:34.714448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.802 [2024-05-15 08:40:34.714843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.715061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.715070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.802 [2024-05-15 08:40:34.715077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.802 [2024-05-15 08:40:34.715267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.802 [2024-05-15 08:40:34.715441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.802 [2024-05-15 08:40:34.715449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.802 [2024-05-15 08:40:34.715455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.802 [2024-05-15 08:40:34.718195] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.802 [2024-05-15 08:40:34.727313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.802 [2024-05-15 08:40:34.727704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.727899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.727909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.802 [2024-05-15 08:40:34.727916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.802 [2024-05-15 08:40:34.728089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.802 [2024-05-15 08:40:34.728269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.802 [2024-05-15 08:40:34.728277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.802 [2024-05-15 08:40:34.728283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.802 [2024-05-15 08:40:34.730990] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.802 [2024-05-15 08:40:34.740172] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.802 [2024-05-15 08:40:34.740588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.740785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.740795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.802 [2024-05-15 08:40:34.740802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.802 [2024-05-15 08:40:34.740975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.802 [2024-05-15 08:40:34.741149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.802 [2024-05-15 08:40:34.741160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.802 [2024-05-15 08:40:34.741174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.802 [2024-05-15 08:40:34.743881] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.802 [2024-05-15 08:40:34.753113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.802 [2024-05-15 08:40:34.753532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.753772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.753803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.802 [2024-05-15 08:40:34.753824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.802 [2024-05-15 08:40:34.754425] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.802 [2024-05-15 08:40:34.754791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.802 [2024-05-15 08:40:34.754800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.802 [2024-05-15 08:40:34.754806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.802 [2024-05-15 08:40:34.757520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.802 [2024-05-15 08:40:34.765998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.802 [2024-05-15 08:40:34.766379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.766578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.766589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.802 [2024-05-15 08:40:34.766596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.802 [2024-05-15 08:40:34.766770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.802 [2024-05-15 08:40:34.766944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.802 [2024-05-15 08:40:34.766952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.802 [2024-05-15 08:40:34.766958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.802 [2024-05-15 08:40:34.769679] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.802 [2024-05-15 08:40:34.778932] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.802 [2024-05-15 08:40:34.779328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.802 [2024-05-15 08:40:34.779428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.803 [2024-05-15 08:40:34.779438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.803 [2024-05-15 08:40:34.779445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.803 [2024-05-15 08:40:34.779619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.803 [2024-05-15 08:40:34.779792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.803 [2024-05-15 08:40:34.779800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.803 [2024-05-15 08:40:34.779810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.803 [2024-05-15 08:40:34.782520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.803 [2024-05-15 08:40:34.791754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.803 [2024-05-15 08:40:34.792153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.803 [2024-05-15 08:40:34.792425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.803 [2024-05-15 08:40:34.792456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.803 [2024-05-15 08:40:34.792478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.803 [2024-05-15 08:40:34.792749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.803 [2024-05-15 08:40:34.792913] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.803 [2024-05-15 08:40:34.792921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.803 [2024-05-15 08:40:34.792927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.803 [2024-05-15 08:40:34.795588] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.803 [2024-05-15 08:40:34.804666] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.803 [2024-05-15 08:40:34.805061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.803 [2024-05-15 08:40:34.805282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.803 [2024-05-15 08:40:34.805294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.803 [2024-05-15 08:40:34.805301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.803 [2024-05-15 08:40:34.805475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.803 [2024-05-15 08:40:34.805648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.803 [2024-05-15 08:40:34.805656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.803 [2024-05-15 08:40:34.805662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.803 [2024-05-15 08:40:34.808395] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.803 [2024-05-15 08:40:34.817641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.803 [2024-05-15 08:40:34.818068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.803 [2024-05-15 08:40:34.818212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.803 [2024-05-15 08:40:34.818227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:47.803 [2024-05-15 08:40:34.818235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:47.803 [2024-05-15 08:40:34.818415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:47.803 [2024-05-15 08:40:34.818596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.803 [2024-05-15 08:40:34.818605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.803 [2024-05-15 08:40:34.818611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.803 [2024-05-15 08:40:34.821534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.064 [2024-05-15 08:40:34.830754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.064 [2024-05-15 08:40:34.831178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.831326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.831337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.064 [2024-05-15 08:40:34.831343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.064 [2024-05-15 08:40:34.831517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.064 [2024-05-15 08:40:34.831692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.064 [2024-05-15 08:40:34.831700] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.064 [2024-05-15 08:40:34.831707] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.064 [2024-05-15 08:40:34.834524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.064 [2024-05-15 08:40:34.843633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.064 [2024-05-15 08:40:34.844070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.844280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.844313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.064 [2024-05-15 08:40:34.844334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.064 [2024-05-15 08:40:34.844860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.064 [2024-05-15 08:40:34.845034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.064 [2024-05-15 08:40:34.845042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.064 [2024-05-15 08:40:34.845049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.064 [2024-05-15 08:40:34.847771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.064 [2024-05-15 08:40:34.856569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.064 [2024-05-15 08:40:34.856983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.857188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.857199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.064 [2024-05-15 08:40:34.857206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.064 [2024-05-15 08:40:34.857380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.064 [2024-05-15 08:40:34.857554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.064 [2024-05-15 08:40:34.857562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.064 [2024-05-15 08:40:34.857569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.064 [2024-05-15 08:40:34.860294] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.064 [2024-05-15 08:40:34.869420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.064 [2024-05-15 08:40:34.869871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.870021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.870031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.064 [2024-05-15 08:40:34.870038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.064 [2024-05-15 08:40:34.870222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.064 [2024-05-15 08:40:34.870404] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.064 [2024-05-15 08:40:34.870413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.064 [2024-05-15 08:40:34.870419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.064 [2024-05-15 08:40:34.873293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.064 [2024-05-15 08:40:34.882582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.064 [2024-05-15 08:40:34.883002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.883223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.883235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.064 [2024-05-15 08:40:34.883242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.064 [2024-05-15 08:40:34.883421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.064 [2024-05-15 08:40:34.883600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.064 [2024-05-15 08:40:34.883608] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.064 [2024-05-15 08:40:34.883614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.064 [2024-05-15 08:40:34.886499] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.064 [2024-05-15 08:40:34.895677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.064 [2024-05-15 08:40:34.896192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.896400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.896430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.064 [2024-05-15 08:40:34.896451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.064 [2024-05-15 08:40:34.896838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.064 [2024-05-15 08:40:34.897012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.064 [2024-05-15 08:40:34.897020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.064 [2024-05-15 08:40:34.897026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.064 [2024-05-15 08:40:34.899751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.064 [2024-05-15 08:40:34.908565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.064 [2024-05-15 08:40:34.909024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.909242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.909278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.064 [2024-05-15 08:40:34.909300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.064 [2024-05-15 08:40:34.909887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.064 [2024-05-15 08:40:34.910160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.064 [2024-05-15 08:40:34.910175] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.064 [2024-05-15 08:40:34.910181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.064 [2024-05-15 08:40:34.912953] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.064 [2024-05-15 08:40:34.921607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.064 [2024-05-15 08:40:34.922035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.922280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.064 [2024-05-15 08:40:34.922292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.064 [2024-05-15 08:40:34.922299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.064 [2024-05-15 08:40:34.922473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.064 [2024-05-15 08:40:34.922647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.064 [2024-05-15 08:40:34.922656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.064 [2024-05-15 08:40:34.922662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.065 [2024-05-15 08:40:34.925442] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.065 [2024-05-15 08:40:34.934641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.065 [2024-05-15 08:40:34.935088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.935284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.935316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.065 [2024-05-15 08:40:34.935338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.065 [2024-05-15 08:40:34.935923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.065 [2024-05-15 08:40:34.936223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.065 [2024-05-15 08:40:34.936232] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.065 [2024-05-15 08:40:34.936238] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.065 [2024-05-15 08:40:34.938948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.065 [2024-05-15 08:40:34.947667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.065 [2024-05-15 08:40:34.948119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.948336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.948379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.065 [2024-05-15 08:40:34.948400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.065 [2024-05-15 08:40:34.948986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.065 [2024-05-15 08:40:34.949287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.065 [2024-05-15 08:40:34.949295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.065 [2024-05-15 08:40:34.949302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.065 [2024-05-15 08:40:34.952071] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.065 [2024-05-15 08:40:34.960588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.065 [2024-05-15 08:40:34.961008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.961236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.961248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.065 [2024-05-15 08:40:34.961255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.065 [2024-05-15 08:40:34.961429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.065 [2024-05-15 08:40:34.961603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.065 [2024-05-15 08:40:34.961611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.065 [2024-05-15 08:40:34.961618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.065 [2024-05-15 08:40:34.964330] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.065 [2024-05-15 08:40:34.973447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.065 [2024-05-15 08:40:34.973745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.973959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.973970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.065 [2024-05-15 08:40:34.973977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.065 [2024-05-15 08:40:34.974150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.065 [2024-05-15 08:40:34.974333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.065 [2024-05-15 08:40:34.974343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.065 [2024-05-15 08:40:34.974349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.065 [2024-05-15 08:40:34.977060] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.065 [2024-05-15 08:40:34.986324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.065 [2024-05-15 08:40:34.986619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.986722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.986732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.065 [2024-05-15 08:40:34.986742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.065 [2024-05-15 08:40:34.986916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.065 [2024-05-15 08:40:34.987090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.065 [2024-05-15 08:40:34.987098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.065 [2024-05-15 08:40:34.987104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.065 [2024-05-15 08:40:34.989821] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.065 [2024-05-15 08:40:34.999257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.065 [2024-05-15 08:40:34.999557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.999758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:34.999769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.065 [2024-05-15 08:40:34.999776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.065 [2024-05-15 08:40:34.999949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.065 [2024-05-15 08:40:35.000123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.065 [2024-05-15 08:40:35.000131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.065 [2024-05-15 08:40:35.000137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.065 [2024-05-15 08:40:35.002862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.065 [2024-05-15 08:40:35.012139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.065 [2024-05-15 08:40:35.012501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:35.012609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:35.012619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.065 [2024-05-15 08:40:35.012626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.065 [2024-05-15 08:40:35.012799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.065 [2024-05-15 08:40:35.012974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.065 [2024-05-15 08:40:35.012982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.065 [2024-05-15 08:40:35.012989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.065 [2024-05-15 08:40:35.015710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.065 [2024-05-15 08:40:35.024984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.065 [2024-05-15 08:40:35.025350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:35.025502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:35.025512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.065 [2024-05-15 08:40:35.025519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.065 [2024-05-15 08:40:35.025696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.065 [2024-05-15 08:40:35.025869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.065 [2024-05-15 08:40:35.025877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.065 [2024-05-15 08:40:35.025884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.065 [2024-05-15 08:40:35.028602] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.065 [2024-05-15 08:40:35.037854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.065 [2024-05-15 08:40:35.038271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:35.038427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.065 [2024-05-15 08:40:35.038438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.065 [2024-05-15 08:40:35.038444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.065 [2024-05-15 08:40:35.038619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.065 [2024-05-15 08:40:35.038795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.066 [2024-05-15 08:40:35.038805] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.066 [2024-05-15 08:40:35.038811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.066 [2024-05-15 08:40:35.041529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.066 [2024-05-15 08:40:35.050804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.066 [2024-05-15 08:40:35.051255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.066 [2024-05-15 08:40:35.051409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.066 [2024-05-15 08:40:35.051440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.066 [2024-05-15 08:40:35.051462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.066 [2024-05-15 08:40:35.051806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.066 [2024-05-15 08:40:35.052061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.066 [2024-05-15 08:40:35.052072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.066 [2024-05-15 08:40:35.052080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.066 [2024-05-15 08:40:35.056215] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.066 [2024-05-15 08:40:35.064096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.066 [2024-05-15 08:40:35.064524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.066 [2024-05-15 08:40:35.064722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.066 [2024-05-15 08:40:35.064732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.066 [2024-05-15 08:40:35.064739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.066 [2024-05-15 08:40:35.064913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.066 [2024-05-15 08:40:35.065090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.066 [2024-05-15 08:40:35.065098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.066 [2024-05-15 08:40:35.065104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.066 [2024-05-15 08:40:35.067890] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 446890 Killed "${NVMF_APP[@]}" "$@"
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:48.066 [2024-05-15 08:40:35.077226] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.066 [2024-05-15 08:40:35.077638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.066 [2024-05-15 08:40:35.077858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.066 [2024-05-15 08:40:35.077869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.066 [2024-05-15 08:40:35.077876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.066 [2024-05-15 08:40:35.078054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.066 [2024-05-15 08:40:35.078244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.066 [2024-05-15 08:40:35.078254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.066 [2024-05-15 08:40:35.078261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=448307
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 448307
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 448307 ']'
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:48.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-05-15 08:40:35.081135] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:28:48.066 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:48.327 [2024-05-15 08:40:35.090338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.327 [2024-05-15 08:40:35.090704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.327 [2024-05-15 08:40:35.090980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.327 [2024-05-15 08:40:35.090993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.327 [2024-05-15 08:40:35.091001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.327 [2024-05-15 08:40:35.091195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.327 [2024-05-15 08:40:35.091375] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.327 [2024-05-15 08:40:35.091384] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.327 [2024-05-15 08:40:35.091390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.327 [2024-05-15 08:40:35.094273] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.327 [2024-05-15 08:40:35.103456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.327 [2024-05-15 08:40:35.103903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.327 [2024-05-15 08:40:35.104127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.327 [2024-05-15 08:40:35.104138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.327 [2024-05-15 08:40:35.104145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.327 [2024-05-15 08:40:35.104331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.327 [2024-05-15 08:40:35.104530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.327 [2024-05-15 08:40:35.104538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.327 [2024-05-15 08:40:35.104544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.327 [2024-05-15 08:40:35.107425] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.327 [2024-05-15 08:40:35.116580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.327 [2024-05-15 08:40:35.117011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.327 [2024-05-15 08:40:35.117170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.327 [2024-05-15 08:40:35.117181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.327 [2024-05-15 08:40:35.117188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.327 [2024-05-15 08:40:35.117369] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.327 [2024-05-15 08:40:35.117548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.327 [2024-05-15 08:40:35.117557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.327 [2024-05-15 08:40:35.117563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.327 [2024-05-15 08:40:35.120436] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.327 [2024-05-15 08:40:35.125437] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:28:48.327 [2024-05-15 08:40:35.125475] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:48.327 [2024-05-15 08:40:35.129764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.327 [2024-05-15 08:40:35.130182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.327 [2024-05-15 08:40:35.130336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.327 [2024-05-15 08:40:35.130347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.327 [2024-05-15 08:40:35.130357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.327 [2024-05-15 08:40:35.130537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.327 [2024-05-15 08:40:35.130717] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.328 [2024-05-15 08:40:35.130726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.328 [2024-05-15 08:40:35.130733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.328 [2024-05-15 08:40:35.133612] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.328 [2024-05-15 08:40:35.142943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.328 [2024-05-15 08:40:35.143401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.143555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.143566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.328 [2024-05-15 08:40:35.143573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.328 [2024-05-15 08:40:35.143753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.328 [2024-05-15 08:40:35.143933] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.328 [2024-05-15 08:40:35.143941] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.328 [2024-05-15 08:40:35.143948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.328 [2024-05-15 08:40:35.146817] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.328 EAL: No free 2048 kB hugepages reported on node 1
00:28:48.328 [2024-05-15 08:40:35.156145] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.328 [2024-05-15 08:40:35.156490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.156646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.156656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.328 [2024-05-15 08:40:35.156663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.328 [2024-05-15 08:40:35.156843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.328 [2024-05-15 08:40:35.157022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.328 [2024-05-15 08:40:35.157030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.328 [2024-05-15 08:40:35.157037] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.328 [2024-05-15 08:40:35.159908] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.328 [2024-05-15 08:40:35.169187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.328 [2024-05-15 08:40:35.169588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.169787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.169798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.328 [2024-05-15 08:40:35.169810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.328 [2024-05-15 08:40:35.169988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.328 [2024-05-15 08:40:35.170174] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.328 [2024-05-15 08:40:35.170182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.328 [2024-05-15 08:40:35.170190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.328 [2024-05-15 08:40:35.173023] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.328 [2024-05-15 08:40:35.182257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.328 [2024-05-15 08:40:35.182580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.182680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.182691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.328 [2024-05-15 08:40:35.182698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.328 [2024-05-15 08:40:35.182878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.328 [2024-05-15 08:40:35.183057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.328 [2024-05-15 08:40:35.183065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.328 [2024-05-15 08:40:35.183072] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.328 [2024-05-15 08:40:35.183488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:48.328 [2024-05-15 08:40:35.185925] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.328 [2024-05-15 08:40:35.195285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.328 [2024-05-15 08:40:35.195723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.195829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.195839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.328 [2024-05-15 08:40:35.195846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.328 [2024-05-15 08:40:35.196027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.328 [2024-05-15 08:40:35.196213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.328 [2024-05-15 08:40:35.196222] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.328 [2024-05-15 08:40:35.196229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.328 [2024-05-15 08:40:35.199055] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.328 [2024-05-15 08:40:35.208341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.328 [2024-05-15 08:40:35.208717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.208824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.208834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.328 [2024-05-15 08:40:35.208842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.328 [2024-05-15 08:40:35.209025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.328 [2024-05-15 08:40:35.209215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.328 [2024-05-15 08:40:35.209225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.328 [2024-05-15 08:40:35.209231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.328 [2024-05-15 08:40:35.212107] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.328 [2024-05-15 08:40:35.221454] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.328 [2024-05-15 08:40:35.221890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.222090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.222101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.328 [2024-05-15 08:40:35.222108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.328 [2024-05-15 08:40:35.222295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.328 [2024-05-15 08:40:35.222475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.328 [2024-05-15 08:40:35.222484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.328 [2024-05-15 08:40:35.222490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.328 [2024-05-15 08:40:35.225390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.328 [2024-05-15 08:40:35.234670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.328 [2024-05-15 08:40:35.235097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.235251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.235263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.328 [2024-05-15 08:40:35.235272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.328 [2024-05-15 08:40:35.235453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.328 [2024-05-15 08:40:35.235632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.328 [2024-05-15 08:40:35.235641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.328 [2024-05-15 08:40:35.235648] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.328 [2024-05-15 08:40:35.238491] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.328 [2024-05-15 08:40:35.247773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.328 [2024-05-15 08:40:35.248223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.248377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.328 [2024-05-15 08:40:35.248388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.328 [2024-05-15 08:40:35.248395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.328 [2024-05-15 08:40:35.248575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.329 [2024-05-15 08:40:35.248760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.329 [2024-05-15 08:40:35.248768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.329 [2024-05-15 08:40:35.248774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.329 [2024-05-15 08:40:35.251643] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.329 [2024-05-15 08:40:35.260972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.329 [2024-05-15 08:40:35.261336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.261437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.261451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.329 [2024-05-15 08:40:35.261458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.329 [2024-05-15 08:40:35.261638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.329 [2024-05-15 08:40:35.261818] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.329 [2024-05-15 08:40:35.261826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.329 [2024-05-15 08:40:35.261833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.329 [2024-05-15 08:40:35.264709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.329 [2024-05-15 08:40:35.265147] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:48.329 [2024-05-15 08:40:35.265175] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:48.329 [2024-05-15 08:40:35.265182] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:48.329 [2024-05-15 08:40:35.265188] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:48.329 [2024-05-15 08:40:35.265193] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:48.329 [2024-05-15 08:40:35.265223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:28:48.329 [2024-05-15 08:40:35.265310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:28:48.329 [2024-05-15 08:40:35.265311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:48.329 [2024-05-15 08:40:35.274210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.329 [2024-05-15 08:40:35.274525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.274751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.274762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.329 [2024-05-15 08:40:35.274770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.329 [2024-05-15 08:40:35.274952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.329 [2024-05-15 08:40:35.275132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.329 [2024-05-15 08:40:35.275141] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.329 [2024-05-15 08:40:35.275148] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.329 [2024-05-15 08:40:35.278023] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.329 [2024-05-15 08:40:35.287335] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.329 [2024-05-15 08:40:35.287786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.287931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.287941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.329 [2024-05-15 08:40:35.287950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.329 [2024-05-15 08:40:35.288131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.329 [2024-05-15 08:40:35.288319] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.329 [2024-05-15 08:40:35.288328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.329 [2024-05-15 08:40:35.288335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.329 [2024-05-15 08:40:35.291211] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.329 [2024-05-15 08:40:35.300508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.329 [2024-05-15 08:40:35.300965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.301193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.301205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.329 [2024-05-15 08:40:35.301213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.329 [2024-05-15 08:40:35.301394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.329 [2024-05-15 08:40:35.301574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.329 [2024-05-15 08:40:35.301582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.329 [2024-05-15 08:40:35.301589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.329 [2024-05-15 08:40:35.304457] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.329 [2024-05-15 08:40:35.313649] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.329 [2024-05-15 08:40:35.314082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.314293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.314316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.329 [2024-05-15 08:40:35.314324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.329 [2024-05-15 08:40:35.314506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.329 [2024-05-15 08:40:35.314685] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.329 [2024-05-15 08:40:35.314694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.329 [2024-05-15 08:40:35.314700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.329 [2024-05-15 08:40:35.317560] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.329 [2024-05-15 08:40:35.326862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.329 [2024-05-15 08:40:35.327234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.327385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.327396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.329 [2024-05-15 08:40:35.327403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.329 [2024-05-15 08:40:35.327584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.329 [2024-05-15 08:40:35.327764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.329 [2024-05-15 08:40:35.327773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.329 [2024-05-15 08:40:35.327780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.329 [2024-05-15 08:40:35.330650] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.329 [2024-05-15 08:40:35.339958] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.329 [2024-05-15 08:40:35.340394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.340593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.329 [2024-05-15 08:40:35.340604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.329 [2024-05-15 08:40:35.340611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.329 [2024-05-15 08:40:35.340791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.329 [2024-05-15 08:40:35.340971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.329 [2024-05-15 08:40:35.340979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.329 [2024-05-15 08:40:35.340986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.329 [2024-05-15 08:40:35.343843] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.590 [2024-05-15 08:40:35.353184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.590 [2024-05-15 08:40:35.353624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.590 [2024-05-15 08:40:35.353755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.590 [2024-05-15 08:40:35.353766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.590 [2024-05-15 08:40:35.353773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.590 [2024-05-15 08:40:35.353952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.590 [2024-05-15 08:40:35.354132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.590 [2024-05-15 08:40:35.354140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.590 [2024-05-15 08:40:35.354146] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.590 [2024-05-15 08:40:35.357019] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.590 [2024-05-15 08:40:35.366317] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.590 [2024-05-15 08:40:35.366736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.590 [2024-05-15 08:40:35.366819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.590 [2024-05-15 08:40:35.366829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.590 [2024-05-15 08:40:35.366836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.590 [2024-05-15 08:40:35.367014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.590 [2024-05-15 08:40:35.367198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.590 [2024-05-15 08:40:35.367207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.591 [2024-05-15 08:40:35.367213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.591 [2024-05-15 08:40:35.370074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.591 [2024-05-15 08:40:35.379540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.591 [2024-05-15 08:40:35.379977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.380202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.380214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.591 [2024-05-15 08:40:35.380221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.591 [2024-05-15 08:40:35.380400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.591 [2024-05-15 08:40:35.380579] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.591 [2024-05-15 08:40:35.380587] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.591 [2024-05-15 08:40:35.380593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.591 [2024-05-15 08:40:35.383456] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.591 [2024-05-15 08:40:35.392751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.591 [2024-05-15 08:40:35.393190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.393413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.393423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.591 [2024-05-15 08:40:35.393430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.591 [2024-05-15 08:40:35.393608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.591 [2024-05-15 08:40:35.393787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.591 [2024-05-15 08:40:35.393796] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.591 [2024-05-15 08:40:35.393802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.591 [2024-05-15 08:40:35.396665] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.591 [2024-05-15 08:40:35.405972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.591 [2024-05-15 08:40:35.406410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.406507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.406516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.591 [2024-05-15 08:40:35.406527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.591 [2024-05-15 08:40:35.406706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.591 [2024-05-15 08:40:35.406885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.591 [2024-05-15 08:40:35.406893] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.591 [2024-05-15 08:40:35.406899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.591 [2024-05-15 08:40:35.409776] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.591 [2024-05-15 08:40:35.419075] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.591 [2024-05-15 08:40:35.419500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.419717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.419728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.591 [2024-05-15 08:40:35.419735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.591 [2024-05-15 08:40:35.419914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.591 [2024-05-15 08:40:35.420093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.591 [2024-05-15 08:40:35.420101] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.591 [2024-05-15 08:40:35.420107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.591 [2024-05-15 08:40:35.422968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.591 [2024-05-15 08:40:35.432269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.591 [2024-05-15 08:40:35.432684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.432911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.432921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.591 [2024-05-15 08:40:35.432928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.591 [2024-05-15 08:40:35.433106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.591 [2024-05-15 08:40:35.433290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.591 [2024-05-15 08:40:35.433299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.591 [2024-05-15 08:40:35.433305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.591 [2024-05-15 08:40:35.436162] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.591 [2024-05-15 08:40:35.445399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.591 [2024-05-15 08:40:35.445850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.446029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.446039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.591 [2024-05-15 08:40:35.446047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.591 [2024-05-15 08:40:35.446234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.591 [2024-05-15 08:40:35.446414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.591 [2024-05-15 08:40:35.446422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.591 [2024-05-15 08:40:35.446429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.591 [2024-05-15 08:40:35.449288] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.591 [2024-05-15 08:40:35.458577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.591 [2024-05-15 08:40:35.459020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.459245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.459256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.591 [2024-05-15 08:40:35.459263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.591 [2024-05-15 08:40:35.459442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.591 [2024-05-15 08:40:35.459621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.591 [2024-05-15 08:40:35.459629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.591 [2024-05-15 08:40:35.459636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.591 [2024-05-15 08:40:35.462499] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.591 [2024-05-15 08:40:35.471787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.591 [2024-05-15 08:40:35.472222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.472370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.591 [2024-05-15 08:40:35.472381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.591 [2024-05-15 08:40:35.472388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.591 [2024-05-15 08:40:35.472567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.591 [2024-05-15 08:40:35.472746] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.591 [2024-05-15 08:40:35.472754] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.591 [2024-05-15 08:40:35.472761] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.591 [2024-05-15 08:40:35.475617] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.591 [2024-05-15 08:40:35.484945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.592 [2024-05-15 08:40:35.485385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.485586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.485596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.592 [2024-05-15 08:40:35.485603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.592 [2024-05-15 08:40:35.485785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.592 [2024-05-15 08:40:35.485964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.592 [2024-05-15 08:40:35.485972] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.592 [2024-05-15 08:40:35.485978] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.592 [2024-05-15 08:40:35.488839] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.592 [2024-05-15 08:40:35.498130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.592 [2024-05-15 08:40:35.498551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.498752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.498762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.592 [2024-05-15 08:40:35.498769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.592 [2024-05-15 08:40:35.498948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.592 [2024-05-15 08:40:35.499127] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.592 [2024-05-15 08:40:35.499135] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.592 [2024-05-15 08:40:35.499142] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.592 [2024-05-15 08:40:35.502002] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.592 [2024-05-15 08:40:35.511300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.592 [2024-05-15 08:40:35.511732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.511951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.511961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.592 [2024-05-15 08:40:35.511968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.592 [2024-05-15 08:40:35.512147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.592 [2024-05-15 08:40:35.512331] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.592 [2024-05-15 08:40:35.512340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.592 [2024-05-15 08:40:35.512346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.592 [2024-05-15 08:40:35.515244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.592 [2024-05-15 08:40:35.524533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.592 [2024-05-15 08:40:35.524966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.525172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.525183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.592 [2024-05-15 08:40:35.525190] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.592 [2024-05-15 08:40:35.525370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.592 [2024-05-15 08:40:35.525552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.592 [2024-05-15 08:40:35.525560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.592 [2024-05-15 08:40:35.525566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.592 [2024-05-15 08:40:35.528432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.592 [2024-05-15 08:40:35.537711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.592 [2024-05-15 08:40:35.538144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.538348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.538359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.592 [2024-05-15 08:40:35.538366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.592 [2024-05-15 08:40:35.538546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.592 [2024-05-15 08:40:35.538725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.592 [2024-05-15 08:40:35.538733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.592 [2024-05-15 08:40:35.538740] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.592 [2024-05-15 08:40:35.541603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.592 [2024-05-15 08:40:35.550892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.592 [2024-05-15 08:40:35.551321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.551545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.551555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.592 [2024-05-15 08:40:35.551562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.592 [2024-05-15 08:40:35.551740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.592 [2024-05-15 08:40:35.551919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.592 [2024-05-15 08:40:35.551928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.592 [2024-05-15 08:40:35.551934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.592 [2024-05-15 08:40:35.554800] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.592 [2024-05-15 08:40:35.564080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.592 [2024-05-15 08:40:35.564517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.564729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.564739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.592 [2024-05-15 08:40:35.564746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.592 [2024-05-15 08:40:35.564925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.592 [2024-05-15 08:40:35.565104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.592 [2024-05-15 08:40:35.565112] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.592 [2024-05-15 08:40:35.565121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.592 [2024-05-15 08:40:35.567982] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.592 [2024-05-15 08:40:35.577266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.592 [2024-05-15 08:40:35.577609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.577826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.577836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.592 [2024-05-15 08:40:35.577844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.592 [2024-05-15 08:40:35.578024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.592 [2024-05-15 08:40:35.578208] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.592 [2024-05-15 08:40:35.578217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.592 [2024-05-15 08:40:35.578223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.592 [2024-05-15 08:40:35.581071] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.592 [2024-05-15 08:40:35.590381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.592 [2024-05-15 08:40:35.590815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.590977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.592 [2024-05-15 08:40:35.590988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.592 [2024-05-15 08:40:35.590995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.592 [2024-05-15 08:40:35.591178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.592 [2024-05-15 08:40:35.591358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.593 [2024-05-15 08:40:35.591367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.593 [2024-05-15 08:40:35.591373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.593 [2024-05-15 08:40:35.594229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.593 [2024-05-15 08:40:35.603514] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.593 [2024-05-15 08:40:35.603943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.593 [2024-05-15 08:40:35.604117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.593 [2024-05-15 08:40:35.604128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.593 [2024-05-15 08:40:35.604136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.593 [2024-05-15 08:40:35.604320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.593 [2024-05-15 08:40:35.604498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.593 [2024-05-15 08:40:35.604507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.593 [2024-05-15 08:40:35.604517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.593 [2024-05-15 08:40:35.607387] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.853 [2024-05-15 08:40:35.616697] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.853 [2024-05-15 08:40:35.617056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.853 [2024-05-15 08:40:35.617278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.853 [2024-05-15 08:40:35.617290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.853 [2024-05-15 08:40:35.617297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.853 [2024-05-15 08:40:35.617476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.853 [2024-05-15 08:40:35.617657] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.853 [2024-05-15 08:40:35.617666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.853 [2024-05-15 08:40:35.617673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.853 [2024-05-15 08:40:35.620536] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.853 [2024-05-15 08:40:35.629831] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.853 [2024-05-15 08:40:35.630264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.853 [2024-05-15 08:40:35.630492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.853 [2024-05-15 08:40:35.630502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.853 [2024-05-15 08:40:35.630509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.853 [2024-05-15 08:40:35.630687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.853 [2024-05-15 08:40:35.630867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.853 [2024-05-15 08:40:35.630875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.853 [2024-05-15 08:40:35.630881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.853 [2024-05-15 08:40:35.633744] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.853 [2024-05-15 08:40:35.643050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.853 [2024-05-15 08:40:35.643383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.853 [2024-05-15 08:40:35.643554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.853 [2024-05-15 08:40:35.643564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.853 [2024-05-15 08:40:35.643571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.853 [2024-05-15 08:40:35.643750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.853 [2024-05-15 08:40:35.643930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.853 [2024-05-15 08:40:35.643938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.853 [2024-05-15 08:40:35.643944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.853 [2024-05-15 08:40:35.646808] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.854 [2024-05-15 08:40:35.656273] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.854 [2024-05-15 08:40:35.656703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.656874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.656884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.854 [2024-05-15 08:40:35.656891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.854 [2024-05-15 08:40:35.657070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.854 [2024-05-15 08:40:35.657253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.854 [2024-05-15 08:40:35.657261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.854 [2024-05-15 08:40:35.657267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.854 [2024-05-15 08:40:35.660126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.854 [2024-05-15 08:40:35.669412] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.854 [2024-05-15 08:40:35.669701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.669857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.669868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.854 [2024-05-15 08:40:35.669875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.854 [2024-05-15 08:40:35.670054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.854 [2024-05-15 08:40:35.670236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.854 [2024-05-15 08:40:35.670245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.854 [2024-05-15 08:40:35.670251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.854 [2024-05-15 08:40:35.673112] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.854 [2024-05-15 08:40:35.682571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.854 [2024-05-15 08:40:35.683002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.683095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.683106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.854 [2024-05-15 08:40:35.683113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.854 [2024-05-15 08:40:35.683295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.854 [2024-05-15 08:40:35.683478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.854 [2024-05-15 08:40:35.683486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.854 [2024-05-15 08:40:35.683492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.854 [2024-05-15 08:40:35.686349] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.854 [2024-05-15 08:40:35.695802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.854 [2024-05-15 08:40:35.696242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.696338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.696348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.854 [2024-05-15 08:40:35.696355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.854 [2024-05-15 08:40:35.696535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.854 [2024-05-15 08:40:35.696714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.854 [2024-05-15 08:40:35.696722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.854 [2024-05-15 08:40:35.696728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.854 [2024-05-15 08:40:35.699592] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.854 [2024-05-15 08:40:35.708876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.854 [2024-05-15 08:40:35.709296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.709518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.709528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.854 [2024-05-15 08:40:35.709535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.854 [2024-05-15 08:40:35.709714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.854 [2024-05-15 08:40:35.709893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.854 [2024-05-15 08:40:35.709902] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.854 [2024-05-15 08:40:35.709908] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.854 [2024-05-15 08:40:35.712771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.854 [2024-05-15 08:40:35.722055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.854 [2024-05-15 08:40:35.722516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.722717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.722727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.854 [2024-05-15 08:40:35.722734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.854 [2024-05-15 08:40:35.722914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.854 [2024-05-15 08:40:35.723093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.854 [2024-05-15 08:40:35.723101] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.854 [2024-05-15 08:40:35.723107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.854 [2024-05-15 08:40:35.725969] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.854 [2024-05-15 08:40:35.735257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.854 [2024-05-15 08:40:35.735691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.735916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.735927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.854 [2024-05-15 08:40:35.735934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.854 [2024-05-15 08:40:35.736113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.854 [2024-05-15 08:40:35.736296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.854 [2024-05-15 08:40:35.736305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.854 [2024-05-15 08:40:35.736312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.854 [2024-05-15 08:40:35.739175] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.854 [2024-05-15 08:40:35.748472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.854 [2024-05-15 08:40:35.748907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.749056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.749066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.854 [2024-05-15 08:40:35.749073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.854 [2024-05-15 08:40:35.749257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.854 [2024-05-15 08:40:35.749437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.854 [2024-05-15 08:40:35.749445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.854 [2024-05-15 08:40:35.749451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.854 [2024-05-15 08:40:35.752315] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.854 [2024-05-15 08:40:35.761604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.854 [2024-05-15 08:40:35.761959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.762109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.854 [2024-05-15 08:40:35.762119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420
00:28:48.854 [2024-05-15 08:40:35.762126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set
00:28:48.854 [2024-05-15 08:40:35.762308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor
00:28:48.854 [2024-05-15 08:40:35.762488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.854 [2024-05-15 08:40:35.762496] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.854 [2024-05-15 08:40:35.762503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.854 [2024-05-15 08:40:35.765363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.854 [2024-05-15 08:40:35.774818] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.854 [2024-05-15 08:40:35.775177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.775392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.775404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:48.855 [2024-05-15 08:40:35.775416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:48.855 [2024-05-15 08:40:35.775594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:48.855 [2024-05-15 08:40:35.775774] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.855 [2024-05-15 08:40:35.775782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.855 [2024-05-15 08:40:35.775788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.855 [2024-05-15 08:40:35.778652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.855 [2024-05-15 08:40:35.787948] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.855 [2024-05-15 08:40:35.788310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.788515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.788526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:48.855 [2024-05-15 08:40:35.788532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:48.855 [2024-05-15 08:40:35.788711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:48.855 [2024-05-15 08:40:35.788891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.855 [2024-05-15 08:40:35.788899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.855 [2024-05-15 08:40:35.788906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.855 [2024-05-15 08:40:35.791766] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.855 [2024-05-15 08:40:35.801048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.855 [2024-05-15 08:40:35.801476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.801613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.801624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:48.855 [2024-05-15 08:40:35.801631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:48.855 [2024-05-15 08:40:35.801809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:48.855 [2024-05-15 08:40:35.801988] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.855 [2024-05-15 08:40:35.801997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.855 [2024-05-15 08:40:35.802004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.855 [2024-05-15 08:40:35.804870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.855 [2024-05-15 08:40:35.814169] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.855 [2024-05-15 08:40:35.814493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.814644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.814654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:48.855 [2024-05-15 08:40:35.814665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:48.855 [2024-05-15 08:40:35.814844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:48.855 [2024-05-15 08:40:35.815023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.855 [2024-05-15 08:40:35.815031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.855 [2024-05-15 08:40:35.815038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.855 [2024-05-15 08:40:35.817901] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.855 [2024-05-15 08:40:35.827355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.855 [2024-05-15 08:40:35.827785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.827938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.827949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:48.855 [2024-05-15 08:40:35.827957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:48.855 [2024-05-15 08:40:35.828135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:48.855 [2024-05-15 08:40:35.828318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.855 [2024-05-15 08:40:35.828327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.855 [2024-05-15 08:40:35.828334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.855 [2024-05-15 08:40:35.831194] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.855 [2024-05-15 08:40:35.840490] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.855 [2024-05-15 08:40:35.840927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.841147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.841158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:48.855 [2024-05-15 08:40:35.841169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:48.855 [2024-05-15 08:40:35.841348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:48.855 [2024-05-15 08:40:35.841527] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.855 [2024-05-15 08:40:35.841536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.855 [2024-05-15 08:40:35.841542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.855 [2024-05-15 08:40:35.844403] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.855 [2024-05-15 08:40:35.853685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.855 [2024-05-15 08:40:35.854029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.854213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.854225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:48.855 [2024-05-15 08:40:35.854232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:48.855 [2024-05-15 08:40:35.854415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:48.855 [2024-05-15 08:40:35.854594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.855 [2024-05-15 08:40:35.854602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.855 [2024-05-15 08:40:35.854608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.855 [2024-05-15 08:40:35.857468] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.855 [2024-05-15 08:40:35.866760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.855 [2024-05-15 08:40:35.867174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.867375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.855 [2024-05-15 08:40:35.867387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:48.855 [2024-05-15 08:40:35.867394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:48.855 [2024-05-15 08:40:35.867573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:48.855 [2024-05-15 08:40:35.867753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.855 [2024-05-15 08:40:35.867762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.855 [2024-05-15 08:40:35.867768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.855 [2024-05-15 08:40:35.870631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.116 [2024-05-15 08:40:35.879936] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.116 [2024-05-15 08:40:35.880373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.116 [2024-05-15 08:40:35.880596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.116 [2024-05-15 08:40:35.880607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.116 [2024-05-15 08:40:35.880614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.116 [2024-05-15 08:40:35.880794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.116 [2024-05-15 08:40:35.880973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.116 [2024-05-15 08:40:35.880982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.116 [2024-05-15 08:40:35.880989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.116 [2024-05-15 08:40:35.883855] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.116 [2024-05-15 08:40:35.893160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.116 [2024-05-15 08:40:35.893597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.116 [2024-05-15 08:40:35.893745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.116 [2024-05-15 08:40:35.893756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.116 [2024-05-15 08:40:35.893763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.116 [2024-05-15 08:40:35.893943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.116 [2024-05-15 08:40:35.894126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.116 [2024-05-15 08:40:35.894135] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.116 [2024-05-15 08:40:35.894141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.116 [2024-05-15 08:40:35.897001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.116 [2024-05-15 08:40:35.906291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.116 [2024-05-15 08:40:35.906723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.116 [2024-05-15 08:40:35.906870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.116 [2024-05-15 08:40:35.906881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.116 [2024-05-15 08:40:35.906887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.116 [2024-05-15 08:40:35.907067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.116 [2024-05-15 08:40:35.907255] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.116 [2024-05-15 08:40:35.907264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.116 [2024-05-15 08:40:35.907271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.116 [2024-05-15 08:40:35.910128] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.116 [2024-05-15 08:40:35.919409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.116 [2024-05-15 08:40:35.919764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.116 [2024-05-15 08:40:35.919992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.116 [2024-05-15 08:40:35.920002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.116 [2024-05-15 08:40:35.920009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.116 [2024-05-15 08:40:35.920193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.116 [2024-05-15 08:40:35.920373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.116 [2024-05-15 08:40:35.920381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.116 [2024-05-15 08:40:35.920387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.116 [2024-05-15 08:40:35.923248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
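errno 111 is ECONNREFUSED: each reset attempt re-dials 10.0.0.2:4420 while the test is deliberately holding the target down, so nothing is listening on the port. The same condition can be watched from outside SPDK with a plain TCP probe; the loop below is an illustrative sketch (the nc probe and the loop are ours, not part of the test suite):

    # Probe the NVMe/TCP listen port once a second; while the target is down
    # every attempt fails the same way the connect() calls above do (errno 111).
    while true; do
        if nc -z -w 1 10.0.0.2 4420; then
            echo "$(date +%T) 10.0.0.2:4420 accepting connections"
        else
            echo "$(date +%T) 10.0.0.2:4420 refused"   # connect() -> ECONNREFUSED
        fi
        sleep 1
    done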
00:28:49.116 [2024-05-15 08:40:35.932548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.116 [2024-05-15 08:40:35.932985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.116 [2024-05-15 08:40:35.933131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.116 [2024-05-15 08:40:35.933141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.116 [2024-05-15 08:40:35.933148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.116 [2024-05-15 08:40:35.933331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.116 [2024-05-15 08:40:35.933510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.116 [2024-05-15 08:40:35.933521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.116 [2024-05-15 08:40:35.933527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.116 [2024-05-15 08:40:35.936388] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.116 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.117 [2024-05-15 08:40:35.945682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.117 [2024-05-15 08:40:35.946112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:35.946333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:35.946344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-05-15 08:40:35.946351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.117 [2024-05-15 08:40:35.946531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.117 [2024-05-15 08:40:35.946711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.117 [2024-05-15 08:40:35.946720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.117 [2024-05-15 08:40:35.946726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.117 [2024-05-15 08:40:35.949590] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
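The autotest_common.sh@856/@860 records interleaved above are the tail of the framework's wait-for-startup poll, which blocks until the freshly started app answers on its RPC socket before timing_exit start_nvmf_tgt runs. A condensed stand-in for what that poll does is sketched below; wait_for_rpc is a hypothetical name and the loop shape is our condensation, while scripts/rpc.py with its -t/-s flags and the rpc_get_methods method are stock SPDK:

    # Poll the app's RPC Unix socket until a trivial RPC succeeds.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                     # timed out, i.e. (( i == 0 ))
    }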
00:28:49.117 [2024-05-15 08:40:35.958878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.117 [2024-05-15 08:40:35.959290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:35.959488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:35.959499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-05-15 08:40:35.959506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.117 [2024-05-15 08:40:35.959685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.117 [2024-05-15 08:40:35.959865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.117 [2024-05-15 08:40:35.959873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.117 [2024-05-15 08:40:35.959880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.117 [2024-05-15 08:40:35.962744] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.117 [2024-05-15 08:40:35.972035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.117 [2024-05-15 08:40:35.972452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:35.972592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:35.972602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-05-15 08:40:35.972610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.117 [2024-05-15 08:40:35.972788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.117 [2024-05-15 08:40:35.972971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.117 [2024-05-15 08:40:35.972981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.117 [2024-05-15 08:40:35.972992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.117 [2024-05-15 08:40:35.975853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.117 [2024-05-15 08:40:35.977292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.117 08:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.117 [2024-05-15 08:40:35.985146] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.117 [2024-05-15 08:40:35.985507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:35.985706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:35.985716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-05-15 08:40:35.985723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.117 [2024-05-15 08:40:35.985901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.117 [2024-05-15 08:40:35.986080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.117 [2024-05-15 08:40:35.986088] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.117 [2024-05-15 08:40:35.986095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.117 [2024-05-15 08:40:35.988956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.117 [2024-05-15 08:40:35.998455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.117 [2024-05-15 08:40:35.998892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:35.999068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:35.999079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-05-15 08:40:35.999086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.117 [2024-05-15 08:40:35.999271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.117 [2024-05-15 08:40:35.999451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.117 [2024-05-15 08:40:35.999459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.117 [2024-05-15 08:40:35.999466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.117 [2024-05-15 08:40:36.002331] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.117 Malloc0 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.117 [2024-05-15 08:40:36.011635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.117 [2024-05-15 08:40:36.011968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:36.012112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-05-15 08:40:36.012123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-05-15 08:40:36.012130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.117 [2024-05-15 08:40:36.012315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.117 [2024-05-15 08:40:36.012495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.117 [2024-05-15 08:40:36.012503] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.117 [2024-05-15 08:40:36.012509] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.117 [2024-05-15 08:40:36.015374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.117 08:40:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.118 [2024-05-15 08:40:36.024826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.118 [2024-05-15 08:40:36.025256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-05-15 08:40:36.025408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-05-15 08:40:36.025419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161e840 with addr=10.0.0.2, port=4420 00:28:49.118 [2024-05-15 08:40:36.025426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161e840 is same with the state(5) to be set 00:28:49.118 [2024-05-15 08:40:36.025605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e840 (9): Bad file descriptor 00:28:49.118 [2024-05-15 08:40:36.025783] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.118 [2024-05-15 08:40:36.025792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.118 [2024-05-15 08:40:36.025799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.118 [2024-05-15 08:40:36.026693] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:49.118 [2024-05-15 08:40:36.026895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.118 [2024-05-15 08:40:36.028659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.118 08:40:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.118 08:40:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 447265 00:28:49.118 [2024-05-15 08:40:36.037959] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.377 [2024-05-15 08:40:36.194227] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
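The rpc_cmd calls from host/bdevperf.sh@17-@21 interleaved above rebuild the target that the host side keeps reconnecting to: a TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and finally the listener on 10.0.0.2:4420, which is why the reset at 08:40:36.194227 at last succeeds and the script can wait on bdevperf pid 447265. Issued by hand, the same sequence looks like this; a sketch of the equivalent scripts/rpc.py invocations with arguments copied from the trace, not lifted from bdevperf.sh itself:

    # Bring up an NVMe-oF/TCP target serving one malloc-backed namespace.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up, the benchmark runs to completion and prints the latency summary below.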
00:28:57.493
00:28:57.493                                          Latency(us)
00:28:57.493 Device Information     : runtime(s)    IOPS     MiB/s    Fail/s      TO/s   Average      min       max
00:28:57.493 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:57.493 Verification LBA range: start 0x0 length 0x4000
00:28:57.493 Nvme1n1                :      15.01  8053.36    31.46  12763.36      0.00   6128.81   445.22  14816.83
00:28:57.493 ===================================================================================================================
00:28:57.493 Total                  :             8053.36    31.46  12763.36      0.00   6128.81   445.22  14816.83
00:28:57.751 08:40:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 08:40:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 08:40:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 08:40:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 448307 ']' 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 448307 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 448307 ']' 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 448307 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 448307 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 448307' killing process with pid 448307 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 448307 [2024-05-15 08:40:44.760998] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 08:40:44 nvmf_tcp.nvmf_bdevperf --
common/autotest_common.sh@970 -- # wait 448307 00:28:58.010 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:58.010 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:58.010 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:58.010 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:58.010 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:58.010 08:40:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.010 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:58.010 08:40:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.548 08:40:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:00.548 00:29:00.548 real 0m26.077s 00:29:00.548 user 1m2.584s 00:29:00.548 sys 0m6.124s 00:29:00.548 08:40:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:00.548 08:40:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.548 ************************************ 00:29:00.548 END TEST nvmf_bdevperf 00:29:00.548 ************************************ 00:29:00.548 08:40:47 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:00.548 08:40:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:00.548 08:40:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:00.548 08:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:00.548 ************************************ 00:29:00.548 START TEST nvmf_target_disconnect 00:29:00.548 ************************************ 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:00.548 * Looking for test storage... 
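The teardown traced above is the mirror image of the setup: drop the subsystem over RPC, unload the kernel NVMe modules, kill target pid 448307, and flush the test addresses. The user time (1m2.584s) exceeding the wall-clock time (0m26.077s) is expected, since the SPDK reactors busy-poll on several cores at once. Condensed, the cleanup amounts to the following sketch, assembled from the traced commands rather than quoted from nvmftestfini:

    # Tear down the NVMe-oF target and the host-side kernel state.
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # rmmod's nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 448307 && wait 448307     # works because nvmf_tgt is a child of this shell
    ip -4 addr flush cvl_0_1       # strip the test IP off the initiator-side NIC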
00:29:00.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:00.548 08:40:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:05.816 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:05.816 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.816 08:40:52 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:05.816 Found net devices under 0000:86:00.0: cvl_0_0 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.816 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:05.816 Found net devices under 0000:86:00.1: cvl_0_1 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:05.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:29:05.817 00:29:05.817 --- 10.0.0.2 ping statistics --- 00:29:05.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.817 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:05.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:29:05.817 00:29:05.817 --- 10.0.0.1 ping statistics --- 00:29:05.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.817 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:05.817 ************************************ 00:29:05.817 START TEST nvmf_target_disconnect_tc1 00:29:05.817 ************************************ 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:05.817 EAL: No 
free 2048 kB hugepages reported on node 1 00:29:05.817 [2024-05-15 08:40:52.637135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.817 [2024-05-15 08:40:52.637393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.817 [2024-05-15 08:40:52.637405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2123ae0 with addr=10.0.0.2, port=4420 00:29:05.817 [2024-05-15 08:40:52.637425] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:05.817 [2024-05-15 08:40:52.637436] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:05.817 [2024-05-15 08:40:52.637442] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:05.817 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:05.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:05.817 Initializing NVMe Controllers 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:29:05.817 00:29:05.817 real 0m0.087s 00:29:05.817 user 0m0.038s 00:29:05.817 sys 0m0.048s 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.817 ************************************ 00:29:05.817 END TEST nvmf_target_disconnect_tc1 00:29:05.817 ************************************ 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:05.817 ************************************ 00:29:05.817 START TEST nvmf_target_disconnect_tc2 00:29:05.817 ************************************ 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=453338 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 453338 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 453338 ']' 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:05.817 08:40:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.817 [2024-05-15 08:40:52.765692] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:29:05.818 [2024-05-15 08:40:52.765730] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.818 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.818 [2024-05-15 08:40:52.834874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:06.076 [2024-05-15 08:40:52.914830] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.076 [2024-05-15 08:40:52.914865] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.076 [2024-05-15 08:40:52.914871] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.076 [2024-05-15 08:40:52.914877] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.076 [2024-05-15 08:40:52.914883] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
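The nvmf_tcp_init block earlier in this trace builds a two-endpoint TCP topology on a single host: the target-side netdev cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and reachability is verified with a ping in each direction. A minimal sketch that reproduces the same topology follows, assuming a veth pair as a stand-in for the cvl_0_0/cvl_0_1 E810 port pair used on this node (run as root):

    # Stand-in for the hardware pair; namespace, names, IPs and port mirror the trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link add cvl_0_1 type veth peer name cvl_0_0
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

This topology is also why tc1 above passes: it runs the reconnect example before any target is listening on 10.0.0.2:4420, so connect() fails with errno 111 (ECONNREFUSED), spdk_nvme_probe() reports "Create probe context failed", and the test treats that failure as the expected outcome.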
00:29:06.076 [2024-05-15 08:40:52.915481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:06.076 [2024-05-15 08:40:52.915567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:06.076 [2024-05-15 08:40:52.915583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:06.076 [2024-05-15 08:40:52.915584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.640 Malloc0 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.640 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:06.641 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.641 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.641 [2024-05-15 08:40:53.642725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.641 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.641 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:06.641 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.641 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.641 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.641 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:06.641 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.899 08:40:53 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.899 [2024-05-15 08:40:53.674766] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:06.899 [2024-05-15 08:40:53.674980] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=453536 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:06.899 08:40:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.899 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.808 08:40:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 453338 00:29:08.808 08:40:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 
starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 [2024-05-15 08:40:55.702228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O 
failed 00:29:08.808 Read completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.808 Write completed with error (sct=0, sc=8) 00:29:08.808 starting I/O failed 00:29:08.809 Write completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 Read completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 Read completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 Read completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 Read completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 Write completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 Read completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 Write completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 Read completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 Read completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 Write completed with error (sct=0, sc=8) 00:29:08.809 starting I/O failed 00:29:08.809 [2024-05-15 08:40:55.702443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.809 [2024-05-15 08:40:55.702567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.702723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.702735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.702955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.703080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.703089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.703140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.703288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.703298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.703534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.703651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.703681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
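The nvmf_tgt under test was started with "-m 0xF0", which pins its reactors to CPUs 4-7; the four "Reactor started on core 4..7" notices earlier in the trace confirm the mask took effect. A quick way to expand such a mask by hand (sketch, plain bash arithmetic):

    # Decode an SPDK/DPDK core mask into CPU numbers; 0xF0 -> 4 5 6 7.
    mask=0xF0
    printf 'mask %s -> cores:' "$mask"
    for c in $(seq 0 31); do (( (mask >> c) & 1 )) && printf ' %d' "$c"; done
    echo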
00:29:08.809 [2024-05-15 08:40:55.703805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.704010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.704039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.704333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.704481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.704511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.704751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.704954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.704963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.705255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.705387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.705416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.705543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.705779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.705808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.706068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.706311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.706341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.706525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.706731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.706760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
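Before the fault is injected, tc2 provisions the target through the rpc_cmd calls shown above: a 64 MB Malloc bdev with 512-byte blocks, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and listeners for the subsystem and discovery service on 10.0.0.2:4420. The equivalent sequence driven directly with SPDK's scripts/rpc.py would look like this (a sketch; the rpc.py path and a target on the default RPC socket are assumed):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420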
00:29:08.809 [2024-05-15 08:40:55.706931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.707187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.707217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.707484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.707663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.707701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.707933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.708086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.708116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.708340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.708521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.708550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.708735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.708967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.708996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.709162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.709395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.709424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.709633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.709901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.709910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
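The burst of "completed with error (sct=0, sc=8)" entries above is the injected fault itself: the harness lets the reconnect example run for two seconds, then SIGKILLs the target (pid 453338). The outstanding I/Os (queue depth 32 per qpair, from -q 32) complete with NVMe generic status 0x08, "command aborted due to SQ deletion", and the transport reports CQ error -6 (ENXIO) on qpair ids 3 and 2. In outline, mirroring the arguments visible in the trace (a sketch; the pid variable name is assumed):

    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"    # nvmf_tgt pid; 453338 in this run
    sleep 2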
00:29:08.809 [2024-05-15 08:40:55.710111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.710287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.710297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.710472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.710608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.710621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.710890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.711096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.711105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.711308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.711433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.711442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.711583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.711660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.711672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.711764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.711831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.711840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-05-15 08:40:55.712081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.712177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-05-15 08:40:55.712187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 
00:29:08.810 [2024-05-15 08:40:55.712332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.712488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.712498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.712687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.712771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.712781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.712994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.713062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.713089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.713205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.713400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.713430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.713547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.713722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.713752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.714013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.714189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.714220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.714408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.714586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.714614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 
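The "EAL: No free 2048 kB hugepages reported on node 1" notices earlier in the trace are informational on this box: DPDK is recording that NUMA node 1 has no 2 MB hugepages free, which typically means the reservation landed on node 0 only. Per-node reservations can be inspected like this (sketch; standard Linux sysfs layout assumed):

    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages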
00:29:08.810 [2024-05-15 08:40:55.714867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.715036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.715046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.715126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.715318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.715327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.715530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.715661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.715669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.715801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.715992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.716001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.716092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.716304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.716314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.716393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.716521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.716530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.716665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.716830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.716840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 
00:29:08.810 [2024-05-15 08:40:55.717031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.717159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.717172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.717395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.717533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.717543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.717630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.717716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.717726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.717874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.717958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.717967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.718159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.718291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.718301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.718480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.718682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.718692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.718900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.719062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.719071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 
00:29:08.810 [2024-05-15 08:40:55.719264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.719402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.719412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.719607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.719680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.719690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.719814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.719977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.719987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.720229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.720390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.720400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.720478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.720536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.720546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.720681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.720834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.720843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.810 qpair failed and we were unable to recover it. 00:29:08.810 [2024-05-15 08:40:55.721069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.721203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.810 [2024-05-15 08:40:55.721213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 
00:29:08.811 [2024-05-15 08:40:55.721422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.721558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.721567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.721705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.721960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.721969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.722184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.722433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.722443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.722586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.722806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.722816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.722964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.723118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.723127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.723196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.723425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.723434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.723526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.723743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.723753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 
00:29:08.811 [2024-05-15 08:40:55.723991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.724206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.724216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.724433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.724555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.724564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.724724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.724858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.724868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.725059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.725240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.725250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.725478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.725725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.725735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.725893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.726030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.726040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.726243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.726367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.726377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 
00:29:08.811 [2024-05-15 08:40:55.726534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.726613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.726623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.726849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.727054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.727063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.727212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.727428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.727438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.727626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.727796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.727805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.727969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.728128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.728138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.728344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.728513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.728522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 00:29:08.811 [2024-05-15 08:40:55.728666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.728879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.811 [2024-05-15 08:40:55.728908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.811 qpair failed and we were unable to recover it. 
00:29:08.811 [2024-05-15 08:40:55.729146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.811 [2024-05-15 08:40:55.729424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.811 [2024-05-15 08:40:55.729455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:08.811 qpair failed and we were unable to recover it.
[... the same three-entry failure pattern — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 08:40:55.729 through 08:40:55.796 ...]
00:29:08.817 [2024-05-15 08:40:55.796838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.796976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.796985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-05-15 08:40:55.797204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.797408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.797437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-05-15 08:40:55.797631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.797887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.797916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-05-15 08:40:55.798181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.798349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.798378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-05-15 08:40:55.798558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.798787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.798817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-05-15 08:40:55.799074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.799273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.799304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-05-15 08:40:55.799590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.799696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.799726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 
00:29:08.817 [2024-05-15 08:40:55.799991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.800181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.800211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-05-15 08:40:55.800405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.800584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.800614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-05-15 08:40:55.800814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.801029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.801058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-05-15 08:40:55.801295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.801549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-05-15 08:40:55.801579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-05-15 08:40:55.801730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.801982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.802011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.802307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.802396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.802405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.802602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.802839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.802848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-05-15 08:40:55.803161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.803387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.803416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.803682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.803950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.803959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.804154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.804358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.804389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.804650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.804830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.804860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.805066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.805242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.805273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.805442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.805673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.805702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.805948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.806195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.806225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-05-15 08:40:55.806492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.806677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.806707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.806960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.807201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.807232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.807420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.807697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.807726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.807914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.808104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.808133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.808328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.808526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.808555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.808825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.808982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.809010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.809205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.809453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.809482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-05-15 08:40:55.809650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.809906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.809936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.810204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.810391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.810420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.810656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.810913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.810943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.811193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.811268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.811278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.811518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.811777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.811806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.812054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.812249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.812279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.812536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.812796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.812826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-05-15 08:40:55.813017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.813249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.813279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.813524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.813754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.813783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.813990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.814211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.814242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.814488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.814669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-05-15 08:40:55.814698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-05-15 08:40:55.814825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.814989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.815018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.815229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.815394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.815424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.815635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.815890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.815919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-05-15 08:40:55.816206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.816364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.816393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.816500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.816674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.816703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.816897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.817080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.817089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.817232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.817434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.817444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.817593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.817789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.817799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.818024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.818243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.818253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.818334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.818526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.818536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-05-15 08:40:55.818698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.818908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.818918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.819096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.819329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.819343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.819562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.819776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.819786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.819924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.820090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.820100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.820232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.820484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.820514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.820692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.820897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.820931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.821133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.821363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.821374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-05-15 08:40:55.821598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.821753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.821762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.822018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.822187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.822218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.822428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.822703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.822736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-05-15 08:40:55.822927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.823084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-05-15 08:40:55.823095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:09.093 [2024-05-15 08:40:55.823319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.093 [2024-05-15 08:40:55.823416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.093 [2024-05-15 08:40:55.823426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.093 qpair failed and we were unable to recover it. 00:29:09.093 [2024-05-15 08:40:55.823577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.093 [2024-05-15 08:40:55.823717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.093 [2024-05-15 08:40:55.823727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.093 qpair failed and we were unable to recover it. 00:29:09.093 [2024-05-15 08:40:55.823798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.093 [2024-05-15 08:40:55.823950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.093 [2024-05-15 08:40:55.823960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.093 qpair failed and we were unable to recover it. 
00:29:09.093 [2024-05-15 08:40:55.824105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.093 [2024-05-15 08:40:55.824262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.824272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.824359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.824576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.824586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.824785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.824924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.824934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.825083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.825291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.825323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.825568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.825691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.825720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.825987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.826270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.826300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.826572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.826741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.826770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 
00:29:09.094 [2024-05-15 08:40:55.826884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.827138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.827177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.827357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.827552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.827562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.827762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.827919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.827929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.828103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.828235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.828266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.828474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.828730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.828759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.828946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.829119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.829147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.829454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.829603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.829612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 
00:29:09.094 [2024-05-15 08:40:55.829775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.830039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.830068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.830253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.830364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.830394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.830580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.830831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.830861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.831123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.831396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.831426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.831647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.831909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.831938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.832180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.832352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.832382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.832624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.832868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.832907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 
00:29:09.094 [2024-05-15 08:40:55.833151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.833227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.833237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.833462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.833704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.833733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.833983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.834239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.834270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.834458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.834716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.834745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.834934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.835192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.835222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.835407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.835691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.835720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 00:29:09.094 [2024-05-15 08:40:55.835961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.836150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.094 [2024-05-15 08:40:55.836204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.094 qpair failed and we were unable to recover it. 
00:29:09.095 [2024-05-15 08:40:55.836498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.836685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.836713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.836983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.837241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.837272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.837508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.837708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.837736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.837995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.838146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.838186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.838372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.838605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.838635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.838806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.838997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.839031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.839299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.839456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.839465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 
00:29:09.095 [2024-05-15 08:40:55.839627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.839883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.839913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.840122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.840376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.840407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.840646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.840831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.840860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.841128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.841331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.841341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.841504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.841755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.841784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.842049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.842269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.842299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 00:29:09.095 [2024-05-15 08:40:55.842569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.842823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.095 [2024-05-15 08:40:55.842853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.095 qpair failed and we were unable to recover it. 
00:29:09.100 [2024-05-15 08:40:55.906676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.906818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.906840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.100 qpair failed and we were unable to recover it. 00:29:09.100 [2024-05-15 08:40:55.906973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.907143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.907182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.100 qpair failed and we were unable to recover it. 00:29:09.100 [2024-05-15 08:40:55.907426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.907625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.907659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.100 qpair failed and we were unable to recover it. 00:29:09.100 [2024-05-15 08:40:55.907926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.908107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.908116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.100 qpair failed and we were unable to recover it. 00:29:09.100 [2024-05-15 08:40:55.908352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.908587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.908616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.100 qpair failed and we were unable to recover it. 00:29:09.100 [2024-05-15 08:40:55.908917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.909194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.909224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.100 qpair failed and we were unable to recover it. 00:29:09.100 [2024-05-15 08:40:55.909437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.909675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.909704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.100 qpair failed and we were unable to recover it. 
00:29:09.100 [2024-05-15 08:40:55.909890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.910151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.910190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.100 qpair failed and we were unable to recover it. 00:29:09.100 [2024-05-15 08:40:55.910398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.910653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.910682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.100 qpair failed and we were unable to recover it. 00:29:09.100 [2024-05-15 08:40:55.910860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.100 [2024-05-15 08:40:55.911132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.911161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.911406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.911609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.911638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.911891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.912173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.912183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.912412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.912607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.912619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.912845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.913043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.913071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 
00:29:09.101 [2024-05-15 08:40:55.913200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.913329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.913358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.913552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.913755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.913785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.914065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.914301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.914332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.914581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.914671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.914680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.914836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.915032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.915041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.915206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.915401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.915410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.915504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.915582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.915592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 
00:29:09.101 [2024-05-15 08:40:55.915749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.915942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.915971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.916246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.916435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.916447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.916611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.916730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.916759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.916979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.917104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.917133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.917401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.917564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.917573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.917737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.917917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.917947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.918058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.918263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.918294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 
00:29:09.101 [2024-05-15 08:40:55.918557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.918809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.918838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.919132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.919422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.919453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.919648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.919860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.919889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.920080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.920266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.920296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.920534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.920721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.920756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.921026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.921265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.921296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.921542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.921670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.921680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 
00:29:09.101 [2024-05-15 08:40:55.921906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.922175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.922205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.922462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.922724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.922753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.101 qpair failed and we were unable to recover it. 00:29:09.101 [2024-05-15 08:40:55.923027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.101 [2024-05-15 08:40:55.923193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.923224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.923464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.923573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.923603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.923875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.924077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.924106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.924293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.924576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.924605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.924796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.925046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.925074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 
00:29:09.102 [2024-05-15 08:40:55.925342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.925557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.925586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.925763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.926068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.926097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.926362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.926527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.926536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.926763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.927047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.927076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.927342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.927549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.927578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.927765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.928021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.928051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.928183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.928422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.928451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 
00:29:09.102 [2024-05-15 08:40:55.928716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.928865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.928894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.929108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.929324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.929355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.929500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.929682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.929711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.929976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.930236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.930246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.930387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.930552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.930582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.930864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.931131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.931160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.931346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.931608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.931637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 
00:29:09.102 [2024-05-15 08:40:55.931886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.932064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.932093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.932367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.932612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.932641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.932906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.933154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.933193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.933408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.933632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.933642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.933906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.933998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.934007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.934135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.934352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.934383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.934504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.934707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.934735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 
00:29:09.102 [2024-05-15 08:40:55.934935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.935142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.935178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.935391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.935655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.935684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.102 qpair failed and we were unable to recover it. 00:29:09.102 [2024-05-15 08:40:55.935954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.102 [2024-05-15 08:40:55.936192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.936223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.936491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.936736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.936765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.937032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.937275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.937305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.937425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.937507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.937516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.937678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.937861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.937890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 
00:29:09.103 [2024-05-15 08:40:55.938068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.938333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.938364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.938608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.938848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.938877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.939077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.939261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.939291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.939467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.939670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.939699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.940000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.940258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.940290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.940540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.940779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.940809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.941082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.941340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.941371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 
00:29:09.103 [2024-05-15 08:40:55.941611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.941868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.941897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.942156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.942352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.942382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.942548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.942712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.942722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.942945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.943160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.943175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.943400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.943543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.943572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.943766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.943973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.944001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.944282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.944533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.944562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 
00:29:09.103 [2024-05-15 08:40:55.944857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.945060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.945089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.945355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.945527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.945557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.945824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.946093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.946122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.946339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.946566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.946596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.946787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.947069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.947098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.947286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.947482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.103 [2024-05-15 08:40:55.947511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.103 qpair failed and we were unable to recover it. 00:29:09.103 [2024-05-15 08:40:55.947806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.947975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.948005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 
00:29:09.104 [2024-05-15 08:40:55.948141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.948360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.948391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.948569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.948806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.948835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.949148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.949352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.949382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.949629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.949879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.949908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.950182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.950298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.950328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.950511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.950773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.950803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.951061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.951321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.951352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 
00:29:09.104 [2024-05-15 08:40:55.951564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.951829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.951859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.952065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.952305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.952335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.952576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.952747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.952757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.953054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.953265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.953276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.953450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.953611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.953620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.953801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.954004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.954014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-05-15 08:40:55.954107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.954379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-05-15 08:40:55.954390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 
00:29:09.104 [2024-05-15 08:40:55.954528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.104 [2024-05-15 08:40:55.954669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.104 [2024-05-15 08:40:55.954679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.104 qpair failed and we were unable to recover it.
00:29:09.104 [... the same error sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats for every reconnect attempt from 08:40:55.954 through 08:40:56.010 ...]
00:29:09.110 [2024-05-15 08:40:56.010889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.011046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.011056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.110 qpair failed and we were unable to recover it. 00:29:09.110 [2024-05-15 08:40:56.011196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.011427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.011437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.110 qpair failed and we were unable to recover it. 00:29:09.110 [2024-05-15 08:40:56.011516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.011709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.011718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.110 qpair failed and we were unable to recover it. 00:29:09.110 [2024-05-15 08:40:56.011869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.012038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.012047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.110 qpair failed and we were unable to recover it. 00:29:09.110 [2024-05-15 08:40:56.012249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.012343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.012353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.110 qpair failed and we were unable to recover it. 
00:29:09.110 [2024-05-15 08:40:56.012452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e770 is same with the state(5) to be set
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 [2024-05-15 08:40:56.012802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Write completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 Read completed with error (sct=0, sc=8)
00:29:09.110 starting I/O failed
00:29:09.110 [2024-05-15 08:40:56.013098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:09.110 [2024-05-15 08:40:56.013297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.110 [2024-05-15 08:40:56.013513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.110 [2024-05-15 08:40:56.013528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.110 qpair failed and we were unable to recover it.
00:29:09.110 [2024-05-15 08:40:56.013781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.110 [2024-05-15 08:40:56.013965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.110 [2024-05-15 08:40:56.013974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.110 qpair failed and we were unable to recover it.
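Two notes on decoding the burst above. Each "completed with error (sct=0, sc=8)" / "starting I/O failed" pair is an in-flight command being failed back as the qpair is torn down: status code type 0 is the NVMe generic command status set, in which status code 0x08 is "Command Aborted due to SQ Deletion". And the -6 in the spdk_nvme_qpair_process_completions lines reads as a negated Linux errno, since strerror(6) (ENXIO) is exactly the "No such device or address" text the log prints beside it. A small C sketch assuming only that mapping:

/* decode.c -- interpret "CQ transport error -6 (No such device or address)" */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int cq_err = -6; /* value copied from the log line above */

    /* On Linux, -cq_err == 6 == ENXIO, and strerror(ENXIO) matches the
     * parenthetical the log prints: "No such device or address". */
    printf("CQ transport error %d (%s)\n", cq_err, strerror(-cq_err));
    return 0;
}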
00:29:09.110 [2024-05-15 08:40:56.014174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.014264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.110 [2024-05-15 08:40:56.014273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.110 qpair failed and we were unable to recover it. 00:29:09.110 [2024-05-15 08:40:56.014351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.014543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.014551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.014694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.014754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.014763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.014971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.015137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.015146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.015393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.015480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.015489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.015627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.015768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.015777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.015932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.016072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.016080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 
00:29:09.111 [2024-05-15 08:40:56.016246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.016475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.016487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.016696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.016885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.016894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.017112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.017322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.017332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.017547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.017688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.017698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.017852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.017988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.017998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.018156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.018233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.018242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.018392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.018559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.018569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 
00:29:09.111 [2024-05-15 08:40:56.018657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.018749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.018758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.018920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.019134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.019144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.019329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.019489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.019499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.019645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.019801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.019813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.019892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.020050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.020060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.020292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.020446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.020456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.020656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.020798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.020808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 
00:29:09.111 [2024-05-15 08:40:56.020894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.021037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.021046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.021181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.021341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.021351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.021438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.021599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.021609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.021752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.021992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.022004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.022196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.022407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.022418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.022509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.022703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.022713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.022907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.023061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.023071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 
00:29:09.111 [2024-05-15 08:40:56.023321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.023534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.111 [2024-05-15 08:40:56.023544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.111 qpair failed and we were unable to recover it. 00:29:09.111 [2024-05-15 08:40:56.023668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.023890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.023900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.023993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.024087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.024097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.024221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.024379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.024389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.024619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.024695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.024705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.024867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.024945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.024955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.025104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.025194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.025205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 
00:29:09.112 [2024-05-15 08:40:56.025444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.025573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.025583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.025813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.025998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.026009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.026249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.026382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.026392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.026554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.026692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.026702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.026774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.026967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.026977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.027128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.027347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.027357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.027504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.027657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.027668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 
00:29:09.112 [2024-05-15 08:40:56.027755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.027886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.027896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.028046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.028283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.028293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.028438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.028512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.028522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.028714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.028867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.028877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.029022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.029105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.029115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.029252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.029327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.029337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.029463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.029541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.029551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 
00:29:09.112 [2024-05-15 08:40:56.029798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.029992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.030002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.030163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.030249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.030259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.030424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.030577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.030587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.030718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.030844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.030854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.112 qpair failed and we were unable to recover it. 00:29:09.112 [2024-05-15 08:40:56.031002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.112 [2024-05-15 08:40:56.031223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.031234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.031397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.031483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.031493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.031705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.031918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.031928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 
00:29:09.113 [2024-05-15 08:40:56.032175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.032249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.032259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.032405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.032599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.032609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.032705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.032776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.032786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.032861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.032939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.032948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.033096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.033228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.033238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.033381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.033533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.033543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.033686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.033768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.033778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 
00:29:09.113 [2024-05-15 08:40:56.033861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.033987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.033998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.034080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.034223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.034233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.034397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.034467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.034477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.034673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.034753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.034763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.034896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.035114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.035265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 
00:29:09.113 [2024-05-15 08:40:56.035430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.035567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.035775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.035925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.035994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.036064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.036133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.036142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.036237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.036308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.036317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.036393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.036536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.036546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 
00:29:09.113 [2024-05-15 08:40:56.036740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.036805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.036816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.036885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.036938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.036947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.037071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.037154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.037171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.037263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.037319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.037328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.113 qpair failed and we were unable to recover it. 00:29:09.113 [2024-05-15 08:40:56.037388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.113 [2024-05-15 08:40:56.037543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.114 [2024-05-15 08:40:56.037554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.114 qpair failed and we were unable to recover it. 00:29:09.114 [2024-05-15 08:40:56.037619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.114 [2024-05-15 08:40:56.037754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.114 [2024-05-15 08:40:56.037765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.114 qpair failed and we were unable to recover it. 00:29:09.114 [2024-05-15 08:40:56.037828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.114 [2024-05-15 08:40:56.037895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.114 [2024-05-15 08:40:56.037905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.114 qpair failed and we were unable to recover it. 
00:29:09.119 [2024-05-15 08:40:56.066281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.066413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.066422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.066508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.066632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.066641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.066724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.066801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.066811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.067002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.067089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.067098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.067227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.067306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.067315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.067441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.067564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.067574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.067814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.067951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.067961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 
00:29:09.119 [2024-05-15 08:40:56.068028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.068102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.068111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.068190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.068340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.068353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.068486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.068615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.068627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.068704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.068772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.068785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.068917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.068991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.069004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.069155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.069239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.069253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.069327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.069413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.069425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 
00:29:09.119 [2024-05-15 08:40:56.069516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.069590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.119 [2024-05-15 08:40:56.069603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.119 qpair failed and we were unable to recover it. 00:29:09.119 [2024-05-15 08:40:56.069681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.069739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.069751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.069817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.069949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.069963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.070094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.070171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.070180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.070257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.070381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.070390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.070471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.070599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.070608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.070735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.070819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.070829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 
00:29:09.120 [2024-05-15 08:40:56.070904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.070973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.070982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.071053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.071126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.071137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.071200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.071276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.071285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.071350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.071542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.071552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.071695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.071765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.071775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.071970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.072109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 
00:29:09.120 [2024-05-15 08:40:56.072269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.072482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.072640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.072797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.072870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.072979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.073106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.073254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 
00:29:09.120 [2024-05-15 08:40:56.073502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.073622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.073821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.073886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.074010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.074082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.074091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.074151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.074215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.074225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.074284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.074356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.074365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.074426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.074547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.074557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 
00:29:09.120 [2024-05-15 08:40:56.074624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.074694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.120 [2024-05-15 08:40:56.074704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.120 qpair failed and we were unable to recover it. 00:29:09.120 [2024-05-15 08:40:56.074838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.074900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.074910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.074974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.075045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.075054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.075109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.075300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.075310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.075379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.075444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.075453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.075517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.075640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.075650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.075711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.075835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.075845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 
00:29:09.121 [2024-05-15 08:40:56.075967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.076046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.076055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.076132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.076276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.076286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.076359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.076431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.076441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.076496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.076571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.076581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.076713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.076788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.076797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.076930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.077054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.077063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.077128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.077197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.077207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 
00:29:09.121 [2024-05-15 08:40:56.077264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.077328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.077337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.077392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.077531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.077540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.077748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.077892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.077901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.077971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.078032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.078041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.078177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.078250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.078260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.078328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.078452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.078462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.078594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.078652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.078661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 
00:29:09.121 [2024-05-15 08:40:56.078782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.078850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.078860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.079008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.079134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.079144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.079271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.079347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.079357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.079481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.079538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.079548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.121 [2024-05-15 08:40:56.079614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.079683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.121 [2024-05-15 08:40:56.079692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.121 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.079748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.079816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.079829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.079960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 
00:29:09.122 [2024-05-15 08:40:56.080089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.080301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.080518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.080715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.080851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.080985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.081111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.081312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.081322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.081396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.081531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.081540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 
00:29:09.122 [2024-05-15 08:40:56.081597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.081788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.081798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.081868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.082152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.082297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.082446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.082772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.082914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.082997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 
00:29:09.122 [2024-05-15 08:40:56.083076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.083135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.083145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.083223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.083280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.083289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.083370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.083428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.083437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.083585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.083708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.083717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.083792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.083920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.083931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.084082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.084150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.084159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.084305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.084364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.084374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 
00:29:09.122 [2024-05-15 08:40:56.084461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.084537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.084546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.084606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.084660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.084669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.084742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.084869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.084879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.084956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.085010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.085020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.085144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.085213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.085223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.122 [2024-05-15 08:40:56.085289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.085358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.122 [2024-05-15 08:40:56.085367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.122 qpair failed and we were unable to recover it. 00:29:09.123 [2024-05-15 08:40:56.085442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.123 [2024-05-15 08:40:56.085568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.123 [2024-05-15 08:40:56.085578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.123 qpair failed and we were unable to recover it. 
00:29:09.123 [2024-05-15 08:40:56.085636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.123 [2024-05-15 08:40:56.085692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.123 [2024-05-15 08:40:56.085703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.123 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / qpair failure sequence repeats for tqpair=0x7f86b4000b90 through 2024-05-15 08:40:56.098506 (log time 00:29:09.123-00:29:09.125) ...]
00:29:09.125 [2024-05-15 08:40:56.098589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.125 [2024-05-15 08:40:56.098668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.125 [2024-05-15 08:40:56.098680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.125 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f86bc000b90 through 2024-05-15 08:40:56.100338 ...]
00:29:09.125 [2024-05-15 08:40:56.100438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.125 [2024-05-15 08:40:56.100507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.125 [2024-05-15 08:40:56.100519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.125 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f86ac000b90 through 2024-05-15 08:40:56.114427 (log time 00:29:09.383-00:29:09.385) ...]
00:29:09.385 [2024-05-15 08:40:56.114505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.385 [2024-05-15 08:40:56.114584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.669 [2024-05-15 08:40:56.434979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.669 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f86ac000b90 through 2024-05-15 08:40:56.438920 (log time 00:29:09.669-00:29:09.670) ...]
00:29:09.670 [2024-05-15 08:40:56.439006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.439075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.439088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.670 qpair failed and we were unable to recover it. 00:29:09.670 [2024-05-15 08:40:56.439161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.439311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.439325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.670 qpair failed and we were unable to recover it. 00:29:09.670 [2024-05-15 08:40:56.439417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.439488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.439502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.670 qpair failed and we were unable to recover it. 00:29:09.670 [2024-05-15 08:40:56.439580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.439716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.439728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.670 qpair failed and we were unable to recover it. 00:29:09.670 [2024-05-15 08:40:56.439815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.439888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.439902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.670 qpair failed and we were unable to recover it. 00:29:09.670 [2024-05-15 08:40:56.439990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.440072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.440084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.670 qpair failed and we were unable to recover it. 00:29:09.670 [2024-05-15 08:40:56.440155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.440244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.440257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.670 qpair failed and we were unable to recover it. 
00:29:09.670 [2024-05-15 08:40:56.440407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.440474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.440486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.670 qpair failed and we were unable to recover it. 00:29:09.670 [2024-05-15 08:40:56.440560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.440640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.440653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.670 qpair failed and we were unable to recover it. 00:29:09.670 [2024-05-15 08:40:56.440738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.440819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.440832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.670 qpair failed and we were unable to recover it. 00:29:09.670 [2024-05-15 08:40:56.440971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.441035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.670 [2024-05-15 08:40:56.441047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.441114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.441190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.441205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.441291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.441367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.441380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.441454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.441531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.441544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 
00:29:09.671 [2024-05-15 08:40:56.441684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.441751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.441763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.441831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.441918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.441931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.442106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.442174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.442187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.442258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.442347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.442360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.442439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.442521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.442535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.442695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.442765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.442779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.442845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.442917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.442929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 
00:29:09.671 [2024-05-15 08:40:56.443062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.443125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.443138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.443209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.443275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.443289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.443374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.443501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.443514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.443590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.443740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.443753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.443831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.443907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.443919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.443988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.444162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 
00:29:09.671 [2024-05-15 08:40:56.444316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.444572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.444711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.444871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.444973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.445055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.445128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.445141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.445298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.445377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.445390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.445461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.445547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.445560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 
00:29:09.671 [2024-05-15 08:40:56.445696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.445765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.445778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.445919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.445978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.445990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.671 qpair failed and we were unable to recover it. 00:29:09.671 [2024-05-15 08:40:56.446060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.446143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.671 [2024-05-15 08:40:56.446156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.446245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.446320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.446333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.446465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.446536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.446549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.446616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.446689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.446701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.446773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.446837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.446849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 
00:29:09.672 [2024-05-15 08:40:56.446936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.447022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.447035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.447101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.447175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.447190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.447330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.447411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.447425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.447504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.447584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.447596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.447666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.447805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.447818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.447969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.448043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.448055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.448204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.448289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.448302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 
00:29:09.672 [2024-05-15 08:40:56.448374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.448539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.448552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.448627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.448691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.448704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.448800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.448868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.448880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.448958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.449090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.449103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.449174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.449235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.449248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.449386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.449455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.449468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.449535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.449611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.449623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 
00:29:09.672 [2024-05-15 08:40:56.449769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.449902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.449914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.449984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.450060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.450072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.450156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.450260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.450275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.450414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.450485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.450498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.450659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.450743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.450756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.450824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.450952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.450964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 00:29:09.672 [2024-05-15 08:40:56.451098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.451178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.672 [2024-05-15 08:40:56.451190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.672 qpair failed and we were unable to recover it. 
00:29:09.673 [2024-05-15 08:40:56.451260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.451331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.451344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.451518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.451589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.451601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.451674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.451809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.451823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.451897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.451969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.451981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.452056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.452118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.452131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.452220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.452290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.452303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.452386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.452449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.452461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 
00:29:09.673 [2024-05-15 08:40:56.452533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.452727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.452740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.452807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.452954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.452967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.453036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.453184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.453198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.453272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.453343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.453355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.453431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.453508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.453521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.453585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.453653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.453667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 00:29:09.673 [2024-05-15 08:40:56.453735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.453806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.673 [2024-05-15 08:40:56.453818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.673 qpair failed and we were unable to recover it. 
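An annotation for triage: errno = 111 on Linux is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2 port 4420 (the standard NVMe/TCP port) while nvme_tcp_qpair_connect_sock() was trying to build the qpair socket. The standalone probe below reproduces the same condition; it is a minimal sketch, not SPDK code, and the only values taken from this log are the address and the port.

/* probe4420.c - minimal sketch (not SPDK code): attempt the same TCP
 * connect the log shows failing. Assumes Linux, where errno 111 is
 * ECONNREFUSED. Address and port are taken from the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 1;
    }
    printf("connected: a listener is up on 10.0.0.2:4420\n");
    close(fd);
    return 0;
}

If this probe connects while the test still logs errno = 111, the failure lies in qpair setup rather than basic reachability; if the probe is also refused, the nvmf target either never started its listener on 4420 or tore it down before these attempts.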
00:29:09.673 [2024-05-15 08:40:56.453917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.673 [2024-05-15 08:40:56.453988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.673 [2024-05-15 08:40:56.454001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.673 qpair failed and we were unable to recover it.
00:29:09.673 [2024-05-15 08:40:56.454121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.673 [2024-05-15 08:40:56.454288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.673 [2024-05-15 08:40:56.454306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.673 qpair failed and we were unable to recover it.
00:29:09.673 [2024-05-15 08:40:56.454400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.673 [2024-05-15 08:40:56.454549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.673 [2024-05-15 08:40:56.454562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.673 qpair failed and we were unable to recover it.
[... the same sequence, now for tqpair=0x7f86b4000b90, repeats continuously from 08:40:56.454 through 08:40:56.461 ...]
00:29:09.675 [2024-05-15 08:40:56.462048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.462186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.462337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.462482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.462614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.462757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.462895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.462962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 
00:29:09.675 [2024-05-15 08:40:56.463021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.463144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.463276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.463427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.463628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.463754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 00:29:09.675 [2024-05-15 08:40:56.463899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.675 [2024-05-15 08:40:56.463967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.675 qpair failed and we were unable to recover it. 
00:29:09.675 [2024-05-15 08:40:56.464022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.464177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.464322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.464522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.464645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.464785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.464922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.464997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 
00:29:09.676 [2024-05-15 08:40:56.465063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.465203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.465339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.465479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.465690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.465836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.465898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.465960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 
00:29:09.676 [2024-05-15 08:40:56.466089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.466215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.466465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.466635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.466835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.466903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.466971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.467107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 
00:29:09.676 [2024-05-15 08:40:56.467242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.467382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.467511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.467759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.467829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.468002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.468060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.468070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.468191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.468320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.468330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 00:29:09.676 [2024-05-15 08:40:56.468405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.468548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.676 [2024-05-15 08:40:56.468558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.676 qpair failed and we were unable to recover it. 
00:29:09.677 [2024-05-15 08:40:56.468624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.468683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.468692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.468769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.468827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.468836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.468963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.469156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.469288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.469413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.469567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 
00:29:09.677 [2024-05-15 08:40:56.469835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.469908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.469980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.470114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.470333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.470461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.470586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.470719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 
00:29:09.677 [2024-05-15 08:40:56.470933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.470993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.471073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.471333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.471502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.471716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.471863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.471930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.471989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 
00:29:09.677 [2024-05-15 08:40:56.472207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.472336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.472475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.472692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.472903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.472975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.473109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.473183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.473193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 00:29:09.677 [2024-05-15 08:40:56.473272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.473412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.677 [2024-05-15 08:40:56.473422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.677 qpair failed and we were unable to recover it. 
00:29:09.678 [2024-05-15 08:40:56.473489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.473545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.473554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.473624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.473684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.473693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.473768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.473832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.473842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.473969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.474099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.474234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.474428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 
00:29:09.678 [2024-05-15 08:40:56.474602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.474848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.474916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.474992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.475149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.475433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.475628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.475768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 
00:29:09.678 [2024-05-15 08:40:56.475919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.475982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.476111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.476265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.476416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.476549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.476675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.476811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.476977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 
00:29:09.678 [2024-05-15 08:40:56.477055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.477204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.477361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.477510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.477629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.477766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.678 qpair failed and we were unable to recover it. 00:29:09.678 [2024-05-15 08:40:56.477902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.678 [2024-05-15 08:40:56.477958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.679 [2024-05-15 08:40:56.477968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.679 qpair failed and we were unable to recover it. 
00:29:09.679 [2024-05-15 08:40:56.478026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.478170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.478307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.478499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.478647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.478779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.478918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.478980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.479056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.479177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.479320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.479463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.479596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.479726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.479851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.479924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.479984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.480040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.480049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.480179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.480236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.480245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.480373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.480495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.480505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.480576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.480649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.480658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.480806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.480935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.480945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.481001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.481128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.481138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.481204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.481263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.481273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.481331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.481403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.481412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-05-15 08:40:56.481486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.481555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.679 [2024-05-15 08:40:56.481565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.481647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.481713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.481723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.481778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.481854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.481863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.481929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.481989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.481998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.482063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.482330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.482455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.482611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.482773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.482907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.482974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.483049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.483244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.483395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.483536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.483683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.483883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.483951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.484005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.484140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.484277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.484396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.484520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.484725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.484797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.484952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.485092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.485242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.485385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.485519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.485719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-05-15 08:40:56.485857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.680 [2024-05-15 08:40:56.485931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.486005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.486148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.486344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.486467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.486609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.486754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.486818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.486877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.487081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.487247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.487378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.487515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.487638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.487848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.487919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.487983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.488101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.488235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.488352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.488482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.488627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.488749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.488881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.488944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.489006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.489065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.489075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.489136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.489201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.489211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.489268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.489325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.489335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.489400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.489454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.489466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.681 [2024-05-15 08:40:56.489528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.489583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.681 [2024-05-15 08:40:56.489592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.681 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.489669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.489723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.489733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.489859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.489982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.489992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.490056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.490214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.490351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.490502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.490636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.490792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.490862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.490985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.491121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.491332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.491471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.491602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.491734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.491869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.492037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.492191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.492320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.492467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.492590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.492730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.492849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.492921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.493046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.493180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.493325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.493449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.493655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.493794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.493869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.682 [2024-05-15 08:40:56.493947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.494005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.682 [2024-05-15 08:40:56.494014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.682 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.494082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.494142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.494151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.494235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.494293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.494302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.494366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.494427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.494436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.494560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.494620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.494629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.494763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.494819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.494828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.494950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.495075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.495205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.495336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.495531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.495650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.495792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.495921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.495983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.496036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.496173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.496336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.496481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.496601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.496804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.496884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.496947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.497082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.497207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.497333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.497473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.497594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.497661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.683 qpair failed and we were unable to recover it.
00:29:09.683 [2024-05-15 08:40:56.497723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.683 [2024-05-15 08:40:56.500252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.500264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.500330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.500468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.500478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.500608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.500689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.500699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.500757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.500830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.500840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.500901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.500953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.500963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.501038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.501256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.501385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.501505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.501698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.501848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.501913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.501976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.502110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.502254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.502378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.502527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.502656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.502805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.502869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.502990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.503045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.503054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.503121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.503217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.503227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.503305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.503368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.503377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.503453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.503509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.503519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.503582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.503651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.684 [2024-05-15 08:40:56.503661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.684 qpair failed and we were unable to recover it.
00:29:09.684 [2024-05-15 08:40:56.503722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.685 [2024-05-15 08:40:56.503785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.685 [2024-05-15 08:40:56.503796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.685 qpair failed and we were unable to recover it.
00:29:09.685 [2024-05-15 08:40:56.503847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.503905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.503915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.503979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.504102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.504236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.504448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.504586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.504734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.504867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 
00:29:09.685 [2024-05-15 08:40:56.504993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.505122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.505260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.505398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.505604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.505743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.505889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.505962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 
00:29:09.685 [2024-05-15 08:40:56.506036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.506172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.506321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.506541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.506665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.506815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.506940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.506996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.507005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 
00:29:09.685 [2024-05-15 08:40:56.507070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.507137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.507146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.507211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.507275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.507285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.507355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.507420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.507429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.685 [2024-05-15 08:40:56.507484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.507626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.685 [2024-05-15 08:40:56.507636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.685 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.507700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.507760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.507770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.507893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.507950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.507960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.508019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 
00:29:09.686 [2024-05-15 08:40:56.508156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.508298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.508449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.508581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.508709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.508828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.508894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.508954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 
00:29:09.686 [2024-05-15 08:40:56.509142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.509293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.509425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.509555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.509693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.509814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.509881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.510006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 
00:29:09.686 [2024-05-15 08:40:56.510147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.510284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.510421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.510555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.510691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.510896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.510960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.511018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.511140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.511150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 
00:29:09.686 [2024-05-15 08:40:56.511222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.511283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.511293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.511350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.511404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.686 [2024-05-15 08:40:56.511413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.686 qpair failed and we were unable to recover it. 00:29:09.686 [2024-05-15 08:40:56.511558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.511609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.511619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.511682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.511752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.511761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.511826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.511894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.511904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.511981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.512117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 
00:29:09.687 [2024-05-15 08:40:56.512244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.512373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.512513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.512631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.512767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.512918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.512978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.513172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 
00:29:09.687 [2024-05-15 08:40:56.513327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.513454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.513578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.513703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.513854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.513927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.513994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.514122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 
00:29:09.687 [2024-05-15 08:40:56.514327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.514466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.514605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.514761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.514884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.514998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.515054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.515116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.515125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.515189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.515252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.515262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 
00:29:09.687 [2024-05-15 08:40:56.515323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.515379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.515389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.687 qpair failed and we were unable to recover it. 00:29:09.687 [2024-05-15 08:40:56.515444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.687 [2024-05-15 08:40:56.515509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.515518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.515573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.515638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.515647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.515717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.515772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.515781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.515841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.515907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.515917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.515976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.516099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 
00:29:09.688 [2024-05-15 08:40:56.516247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.516393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.516517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.516639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.516776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.516904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.516971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.517110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 
00:29:09.688 [2024-05-15 08:40:56.517248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.517383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.517519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.517713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.517931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.517997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.518054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.518195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 
00:29:09.688 [2024-05-15 08:40:56.518321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.518443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.518571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.518695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.518826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.518890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.518946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.519002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.519011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.519139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.519208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.519218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 
00:29:09.688 [2024-05-15 08:40:56.519280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.519334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.519343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.688 qpair failed and we were unable to recover it. 00:29:09.688 [2024-05-15 08:40:56.519490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.688 [2024-05-15 08:40:56.519543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.519552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.519614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.519672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.519681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.519757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.519889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.519898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.519956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.520081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.520223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 
00:29:09.689 [2024-05-15 08:40:56.520353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.520563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.520763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.520903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.520969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.521027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.521159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.521365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 
00:29:09.689 [2024-05-15 08:40:56.521500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.521629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.521857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.521923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.522049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.522178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.522381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.522533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 
00:29:09.689 [2024-05-15 08:40:56.522672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.522889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.522970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.523031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.523156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.523170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.689 qpair failed and we were unable to recover it. 00:29:09.689 [2024-05-15 08:40:56.523229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.689 [2024-05-15 08:40:56.523284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.523416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.523540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.523683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 
00:29:09.690 [2024-05-15 08:40:56.523803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.523922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.523984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.524049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.524223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.524365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.524544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.524673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 
00:29:09.690 [2024-05-15 08:40:56.524810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.524874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.525000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.525135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.525268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.525400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.525539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.525669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 
00:29:09.690 [2024-05-15 08:40:56.525860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.525933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.525985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.526125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.526268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.526393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.526512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.526657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 
00:29:09.690 [2024-05-15 08:40:56.526779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.526851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.526909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.527047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.527057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.527115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.527181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.527190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.690 qpair failed and we were unable to recover it. 00:29:09.690 [2024-05-15 08:40:56.527265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.690 [2024-05-15 08:40:56.527403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.527412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.527561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.527613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.527623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.527681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.527804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.527813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.527876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.527936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.527945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 
00:29:09.691 [2024-05-15 08:40:56.528088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.528146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.528156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.528223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.528282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.528292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.528365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.528496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.528506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.528576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.528770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.528779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.528949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.529020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.529030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.529086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.529148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.529157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.529237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.529359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.529368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 
00:29:09.691 [2024-05-15 08:40:56.529507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.529589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.529618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.529729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.529890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.529920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.530040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.530142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.530183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.530301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.530417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.530446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.530566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.530662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.530691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.530917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.531058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.531094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.531335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.531514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.531531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 
00:29:09.691 [2024-05-15 08:40:56.531619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.531778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.531809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.531922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.532020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.532050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.532282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.532392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.532420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.532597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.532804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.532833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.533002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.533072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.533085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.533187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.533262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.533275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.533434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.533677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.533706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 
00:29:09.691 [2024-05-15 08:40:56.533811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.533912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.533941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.534068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.534247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.534276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.534454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.534582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-05-15 08:40:56.534612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-05-15 08:40:56.534784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.534891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.534904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.535035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.535120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.535134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.535279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.535431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.535461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.535575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.535829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.535857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-05-15 08:40:56.535968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.536044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.536057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.536202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.536284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.536297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.536372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.536453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.536465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.536595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.536730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.536744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.536820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.536892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.536905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.536992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.537132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.537145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.537319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.537517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.537547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-05-15 08:40:56.537665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.537780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.537808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.537899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.538055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.538069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.538244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.538351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.538379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.538610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.538769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.538797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.538914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.539084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.539097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.539254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.539428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.539456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.539574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.539746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.539776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-05-15 08:40:56.539879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.540012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.540025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.540100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.540174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.540187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.540253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.540340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.540353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.540440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.540578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.540592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.540789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.540856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.540868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.540931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.541061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.541074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.541158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.541307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.541320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-05-15 08:40:56.541463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.541533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.541546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.541617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.541700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.541713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-05-15 08:40:56.541777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.541865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-05-15 08:40:56.541877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.542021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.542154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.542172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.542244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.542326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.542339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.542438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.542572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.542601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.542810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.542904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.542932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 
00:29:09.693 [2024-05-15 08:40:56.543031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.543205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.543236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.543348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.543453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.543481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.543582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.543716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.543729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.543802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.543864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.543877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.543959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.544162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.544326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 
00:29:09.693 [2024-05-15 08:40:56.544507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.544723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.544859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.544950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.545027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.545158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.545176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.545251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.545315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.545327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.545396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.545470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.545483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.545549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.545617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.545629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 
00:29:09.693 [2024-05-15 08:40:56.545716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.545865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.545879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.546013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.546083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.546096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.546258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.546347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.546363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.546435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.546498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.546509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.546660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.546727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.546739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.546806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.546872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.546884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-05-15 08:40:56.546959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.547029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-05-15 08:40:56.547042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 
00:29:09.693 [2024-05-15 08:40:56.547178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.693 [2024-05-15 08:40:56.547311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.693 [2024-05-15 08:40:56.547325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.693 qpair failed and we were unable to recover it.
00:29:09.693 [2024-05-15 08:40:56.547463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.693 [2024-05-15 08:40:56.547530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.693 [2024-05-15 08:40:56.547543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.547678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.547756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.547769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.547830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.547914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.547927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.548009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.548079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.548091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.548153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.548313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.548328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.548399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.548464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.548476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.548548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.548639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.548653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.548726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.548788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.548800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.548881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.549029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.549041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.549123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.549257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.549270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.549412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.549544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.549557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.549627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.549754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.549768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.549842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.549922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.549935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.550007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.550144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.550386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.550547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.550764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.550924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.550999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.551152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.551339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.551486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.551731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.551875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.551957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.694 qpair failed and we were unable to recover it.
00:29:09.694 [2024-05-15 08:40:56.552043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.552121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.694 [2024-05-15 08:40:56.552134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.552209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.552284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.552297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.552373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.552518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.552532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.552598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.552754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.552768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.552833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.552899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.552910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.552976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.553113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.553127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.553202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.553294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.553308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.553393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.553461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.553474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.553543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.553619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.553632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.553770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.553834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.553847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.553942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.554159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.554309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.554531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.554744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.554882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.554955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.555019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.555080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.555093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.555227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.555354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.555369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.555456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.555597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.555610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.555681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.555808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.555820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.555888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.555962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.555975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.556123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.556198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.556211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.556292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.556373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.556387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.556453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.556523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.556535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.556602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.556771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.556784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.556882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.556969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.556982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.557065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.557198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.557213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.557297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.557448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.557462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.695 qpair failed and we were unable to recover it.
00:29:09.695 [2024-05-15 08:40:56.557529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.557593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.695 [2024-05-15 08:40:56.557606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.557668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.557743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.557756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.557837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.557904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.557928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.558079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.558147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.558160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.558384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.558467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.558480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.558569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.558635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.558648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.558748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.558810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.558822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.558913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.558981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.558993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.559080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.559147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.559160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.559302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.559374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.559386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.559472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.559536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.559548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.559612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.559747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.559761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.559833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.559895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.559909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.559984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.560049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.560062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.560136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.560222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.560236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.560315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.560443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.560457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.560522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.560591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.560603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.560669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.560757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.560770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.560924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.561087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.561115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.561232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.561345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.561374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.561543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.561643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.561671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.561782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.561959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.561989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.562090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.562153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.562170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.562322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.562419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.562432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.562503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.562643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.562656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.562816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.562941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.562969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.563139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.563326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.563356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.563454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.563548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.563577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.696 qpair failed and we were unable to recover it.
00:29:09.696 [2024-05-15 08:40:56.563691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.696 [2024-05-15 08:40:56.563787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.563816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.563935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.564241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.564399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.564576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.564740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.564892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.564983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.565057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.565137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.565151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.565251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.565371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.565387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.565487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.565562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.565576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.565649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.565711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.565723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.565819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.565885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.565898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.565983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.566117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.566131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.566212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.566282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.566295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.566371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.566444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.566457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.566615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.566779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.566797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.566887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.566965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.566979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.567200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.567293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.567307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.567386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.567458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.567471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.567557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.567696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.567709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.567806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.567943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.567956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.568040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.568114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.568127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.568212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.568280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.568293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.568383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.568466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.568479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.568553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.568626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.568639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.568710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.568791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.568804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.568946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.569012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.569025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.569101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.569176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.569190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.697 qpair failed and we were unable to recover it.
00:29:09.697 [2024-05-15 08:40:56.569323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.697 [2024-05-15 08:40:56.569457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.569470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.569537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.569613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.569626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.569710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.569783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.569796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.569950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.570011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.570025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.570118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.570196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.570209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.570362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.570489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.570502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.570634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.570718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.570732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.570816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.570944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.570960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.571096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.571183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.571197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.571279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.571352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.571365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.571444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.571516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.571529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.571606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.571681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.571693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.571754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.571916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.571946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.572050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.572158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.572201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.572307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.572405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.572434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.572612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.572700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.572713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.572785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.572947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.698 [2024-05-15 08:40:56.572976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:09.698 qpair failed and we were unable to recover it.
00:29:09.698 [2024-05-15 08:40:56.573087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.573198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.573229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.698 qpair failed and we were unable to recover it. 00:29:09.698 [2024-05-15 08:40:56.573343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.573439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.573468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.698 qpair failed and we were unable to recover it. 00:29:09.698 [2024-05-15 08:40:56.573574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.573700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.573729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.698 qpair failed and we were unable to recover it. 00:29:09.698 [2024-05-15 08:40:56.573896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.574058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.574088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.698 qpair failed and we were unable to recover it. 00:29:09.698 [2024-05-15 08:40:56.574201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.574299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.574328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.698 qpair failed and we were unable to recover it. 00:29:09.698 [2024-05-15 08:40:56.574462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.574597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.574610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.698 qpair failed and we were unable to recover it. 00:29:09.698 [2024-05-15 08:40:56.574692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.574769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.574782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.698 qpair failed and we were unable to recover it. 
00:29:09.698 [2024-05-15 08:40:56.574864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.698 [2024-05-15 08:40:56.575008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.575021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.575173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.575239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.575253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.575456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.575599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.575612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.575691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.575757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.575770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.575856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.575925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.575938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.576028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.576093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.576106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.576177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.576309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.576322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 
00:29:09.699 [2024-05-15 08:40:56.576483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.576547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.576559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.576645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.576733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.576746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.576835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.576906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.576920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.576991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.577060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.577073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.577141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.577226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.577240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.577319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.577393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.577406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 00:29:09.699 [2024-05-15 08:40:56.577608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.577677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.699 [2024-05-15 08:40:56.577690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:09.699 qpair failed and we were unable to recover it. 
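On Linux, errno 111 is ECONNREFUSED: every TCP connect() to 10.0.0.2:4420 in the run above was actively refused (nothing listening on the target port, or the listener rejected the connection), which is why posix_sock_create fails and nvme_tcp_qpair_connect_sock cannot bring the qpair up. The following is a minimal standalone sketch of that failing step, not SPDK's posix.c; only the address and port are taken from the log, everything else is illustrative.

/*
 * Minimal sketch reproducing the failing step from the log above.
 * With no listener on 10.0.0.2:4420, connect() fails with errno 111
 * (ECONNREFUSED) on Linux. Not SPDK code; address/port from the log.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in sa = { 0 };
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	sa.sin_family = AF_INET;
	sa.sin_port = htons(4420);		/* NVMe/TCP port from the log */
	inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
		/* Prints "errno = 111 (Connection refused)" when refused. */
		fprintf(stderr, "connect() failed, errno = %d (%s)\n",
			errno, strerror(errno));
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}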
00:29:09.699 [2024-05-15 08:40:56.577765 .. 08:40:56.579601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.699 [2024-05-15 08:40:56.577849 .. 08:40:56.579613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.699 qpair failed and we were unable to recover it.
00:29:09.699 [condensed: pattern repeats for every connection attempt on tqpair=0x7f86bc000b90 in this interval]
00:29:09.699 [2024-05-15 08:40:56.579785 .. 08:40:56.598321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.699 [2024-05-15 08:40:56.579978 .. 08:40:56.598332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.703 qpair failed and we were unable to recover it.
00:29:09.703 [condensed: pattern repeats for every connection attempt on tqpair=0x7f86b4000b90 in this interval; only the timestamps differ]
00:29:09.703 [2024-05-15 08:40:56.598394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.598476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.598485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.703 qpair failed and we were unable to recover it. 00:29:09.703 [2024-05-15 08:40:56.598548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.598612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.598622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.703 qpair failed and we were unable to recover it. 00:29:09.703 [2024-05-15 08:40:56.598748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.598806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.598816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.703 qpair failed and we were unable to recover it. 00:29:09.703 [2024-05-15 08:40:56.598874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.598934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.598944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.703 qpair failed and we were unable to recover it. 00:29:09.703 [2024-05-15 08:40:56.598997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.599071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.599082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.703 qpair failed and we were unable to recover it. 00:29:09.703 [2024-05-15 08:40:56.599207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.599277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.703 [2024-05-15 08:40:56.599286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.703 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.599349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.599417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.599426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 
00:29:09.704 [2024-05-15 08:40:56.599496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.599623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.599633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.599691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.599742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.599751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.599814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.599868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.599878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.599934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.600082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.600278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.600473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 
00:29:09.704 [2024-05-15 08:40:56.600628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.600756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.600887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.600965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.601034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.601161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.601291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.601425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 
00:29:09.704 [2024-05-15 08:40:56.601565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.601712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.601845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.601925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.601983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.602129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.602268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.602410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 
00:29:09.704 [2024-05-15 08:40:56.602535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.602674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.602863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.602933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.602990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.603043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.603053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.603116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.603178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.603188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.603246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.603303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.603312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.704 [2024-05-15 08:40:56.603385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.603502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.603512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 
00:29:09.704 [2024-05-15 08:40:56.603570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.603625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.704 [2024-05-15 08:40:56.603634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.704 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.603689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.603753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.603762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.603821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.603879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.603888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.603945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.604082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.604201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.604330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 
00:29:09.705 [2024-05-15 08:40:56.604523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.604652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.604779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.604865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.604997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.605156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.605290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.605411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 
00:29:09.705 [2024-05-15 08:40:56.605534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.605673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.605885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.605949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.606043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.606205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.606335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.606534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 
00:29:09.705 [2024-05-15 08:40:56.606674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.606798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.606948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.607004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.607063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.607072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.607132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.607266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.607277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.607403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.607460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.607469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.607526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.607588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.607596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.607726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.607853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.607863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 
00:29:09.705 [2024-05-15 08:40:56.608003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.608060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.608070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.608154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.608222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.608231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.705 [2024-05-15 08:40:56.608290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.608352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.705 [2024-05-15 08:40:56.608362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.705 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.608490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.608550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.608575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.608746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.608859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.608889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.608993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.609082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.609110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.609170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.609239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.609249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 
00:29:09.706 [2024-05-15 08:40:56.609318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.609387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.609396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.609469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.609614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.609624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.609695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.609898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.609908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.609983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.610105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.610116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.610181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.610256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.610265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.610430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.610525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.610554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.610655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.610824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.610853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 
00:29:09.706 [2024-05-15 08:40:56.610965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.611055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.611064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.611125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.611279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.611309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.611415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.611521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.611549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.611659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.611757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.611785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.611963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.612106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.612258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 
00:29:09.706 [2024-05-15 08:40:56.612380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.612606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.612754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.612892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.612959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.613014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.613163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.613296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 
00:29:09.706 [2024-05-15 08:40:56.613506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.613675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.613810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.613872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.614000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.614078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.614087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.706 [2024-05-15 08:40:56.614142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.614224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.706 [2024-05-15 08:40:56.614234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.706 qpair failed and we were unable to recover it. 00:29:09.707 [2024-05-15 08:40:56.614302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.707 [2024-05-15 08:40:56.614360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.707 [2024-05-15 08:40:56.614369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.707 qpair failed and we were unable to recover it. 00:29:09.707 [2024-05-15 08:40:56.614426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.707 [2024-05-15 08:40:56.614481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.707 [2024-05-15 08:40:56.614490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:09.707 qpair failed and we were unable to recover it. 
00:29:09.707 [2024-05-15 08:40:56.614559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.707 [2024-05-15 08:40:56.614616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.707 [2024-05-15 08:40:56.614626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:09.707 qpair failed and we were unable to recover it.
[... the preceding four-line sequence repeats with advancing timestamps (08:40:56.614686 through 08:40:56.634031), all for tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 ...]
00:29:09.710 [2024-05-15 08:40:56.634102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.710 [2024-05-15 08:40:56.634186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.710 [2024-05-15 08:40:56.634201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.710 qpair failed and we were unable to recover it.
[... the same four-line sequence repeats for the new tqpair=0x7f86bc000b90, timestamps advancing from 08:40:56.634274 through 08:40:56.646049 ...]
00:29:09.712 [2024-05-15 08:40:56.646151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.712 [2024-05-15 08:40:56.646247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.712 [2024-05-15 08:40:56.646261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.712 qpair failed and we were unable to recover it.
00:29:09.712 [2024-05-15 08:40:56.646473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.646543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.646557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.646808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.646903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.646937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.647064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.647173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.647187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.647272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.647334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.647347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.647421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.647494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.647507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.647665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.647736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.647749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.647828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.647899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.647912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 
00:29:09.712 [2024-05-15 08:40:56.648057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.648277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.648424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.648589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.648748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.648920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.648989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.649003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.649067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.649202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.649216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 
00:29:09.712 [2024-05-15 08:40:56.649359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.649522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.649552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.649715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.649814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.712 [2024-05-15 08:40:56.649844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.712 qpair failed and we were unable to recover it. 00:29:09.712 [2024-05-15 08:40:56.649951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.650027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.650040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.650140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.650209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.650226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.650296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.650357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.650370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.650500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.650585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.650598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.650736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.650877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.650890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 
00:29:09.713 [2024-05-15 08:40:56.650973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.651054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.651069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.651202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.651344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.651358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.651556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.651641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.651654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.651922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.652016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.652045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.652145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.652259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.652272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.652339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.652487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.652501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.652637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.652710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.652723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 
00:29:09.713 [2024-05-15 08:40:56.652816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.652912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.652925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.652993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.653158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.653176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.653255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.653344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.653357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.653434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.653559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.653575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.653640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.653704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.653717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.653782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.653909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.653923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.653999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.654161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.654180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 
00:29:09.713 [2024-05-15 08:40:56.654318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.654451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.654464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.654544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.654739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.654752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.654817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.654890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.654903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.655035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.655113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.655126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.655262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.655353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.655366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.655451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.655514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.655528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.655591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.655739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.655768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 
00:29:09.713 [2024-05-15 08:40:56.655876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.655976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.656005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.656190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.656294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.656307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.656447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.656540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.656554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.656789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.657026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.657056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.713 qpair failed and we were unable to recover it. 00:29:09.713 [2024-05-15 08:40:56.657219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.713 [2024-05-15 08:40:56.657321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.657334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.657414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.657498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.657511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.657644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.657719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.657732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 
00:29:09.714 [2024-05-15 08:40:56.657804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.657938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.657951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.658008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.658081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.658095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.658177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.658252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.658265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.658427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.658510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.658523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.658617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.658687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.658700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.658783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.658934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.658948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.659023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.659103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.659116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 
00:29:09.714 [2024-05-15 08:40:56.659196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.659295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.659309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.659378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.659525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.659539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.659621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.659693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.659707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.659846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.660012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.660026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.660114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.660176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.660190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.660255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.660451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.660464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.660595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.660685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.660698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 
00:29:09.714 [2024-05-15 08:40:56.660768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.660919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.660932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.661096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.661195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.661226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.661331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.661505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.661533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.661642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.661736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.661765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.661931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.662020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.662034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.662171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.662239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.662252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.662326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.662454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.662467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 
00:29:09.714 [2024-05-15 08:40:56.662618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.662767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.662781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.662913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.662997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.663011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.663076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.663320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.663334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.663426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.663502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.663515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.663675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.663794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.663823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.664023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.664186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.664216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 00:29:09.714 [2024-05-15 08:40:56.664472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.664578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.714 [2024-05-15 08:40:56.664608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.714 qpair failed and we were unable to recover it. 
00:29:09.715 [2024-05-15 08:40:56.664785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.664961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.664975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.715 qpair failed and we were unable to recover it. 00:29:09.715 [2024-05-15 08:40:56.665044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.665212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.665225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.715 qpair failed and we were unable to recover it. 00:29:09.715 [2024-05-15 08:40:56.665359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.665568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.665597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.715 qpair failed and we were unable to recover it. 00:29:09.715 [2024-05-15 08:40:56.665767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.665941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.665970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.715 qpair failed and we were unable to recover it. 00:29:09.715 [2024-05-15 08:40:56.666143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.666253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.666267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.715 qpair failed and we were unable to recover it. 00:29:09.715 [2024-05-15 08:40:56.666404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.666569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.715 [2024-05-15 08:40:56.666583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.715 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.666729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.666863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.666876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 
00:29:09.999 [2024-05-15 08:40:56.667046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.667111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.667125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.667261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.667349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.667362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.667430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.667520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.667533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.667614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.667690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.667703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.667848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.667981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.667994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.668140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.668242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.668256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.668349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.668588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.668601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 
00:29:09.999 [2024-05-15 08:40:56.668743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.668889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.668902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.669076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.669223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.669237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.669378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.669518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.669531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.669677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.669824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.669837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.670013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.670210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.670224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.670439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.670506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.670519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 00:29:09.999 [2024-05-15 08:40:56.670593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.670663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.999 [2024-05-15 08:40:56.670676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:09.999 qpair failed and we were unable to recover it. 
00:29:09.999 [2024-05-15 08:40:56.670758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.999 [2024-05-15 08:40:56.670923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.999 [2024-05-15 08:40:56.670936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:09.999 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats back-to-back with only the timestamps advancing (08:40:56.670758 through 08:40:56.710193); every attempt is against tqpair=0x7f86bc000b90, addr=10.0.0.2, port=4420, and every one ends in "qpair failed and we were unable to recover it." ...]
00:29:10.005 [2024-05-15 08:40:56.710109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.005 [2024-05-15 08:40:56.710179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.005 [2024-05-15 08:40:56.710193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.005 qpair failed and we were unable to recover it.
00:29:10.005 [2024-05-15 08:40:56.710391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.710473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.710486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.710740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.710893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.710910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.711021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.711108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.711120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.711303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.711385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.711395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.711453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.711579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.711589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.711725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.711869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.711879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.711945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.712015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.712024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 
00:29:10.005 [2024-05-15 08:40:56.712159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.712248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.712258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.712324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.712381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.712389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.712449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.712568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.712578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.712633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.712799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.712809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.712881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.713028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.713043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.713129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.713206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.713220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.713394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.713483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.713497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 
00:29:10.005 [2024-05-15 08:40:56.713589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.713667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.713681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.713762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.713906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.713920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.714071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.714150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.714168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.714236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.714302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.714318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-05-15 08:40:56.714383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.714577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-05-15 08:40:56.714590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.714663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.714727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.714740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.714872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.715070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.715084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 
00:29:10.006 [2024-05-15 08:40:56.715231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.715316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.715330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.715406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.715486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.715499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.715586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.715648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.715661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.715725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.715820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.715833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.715984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.716127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.716140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.716214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.716298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.716311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.716395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.716471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.716484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 
00:29:10.006 [2024-05-15 08:40:56.716572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.716657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.716670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.716753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.716897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.716911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.717074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.717204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.717218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.717307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.717380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.717396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.717461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.717544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.717558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.717712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.717851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.717865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.717941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.718073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.718087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 
00:29:10.006 [2024-05-15 08:40:56.718167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.718234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.718248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.718407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.718475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.718488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.718575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.718724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.718738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.718801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.718875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.718888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.719091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.719232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.719245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.719446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.719587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.719600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.719740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.719816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.719830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 
00:29:10.006 [2024-05-15 08:40:56.719907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.720043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.720056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.720141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.720281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.720295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.720423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.720504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.720517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.720603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.720685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.720698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.720777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.720852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-05-15 08:40:56.720865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-05-15 08:40:56.720939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.721091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.721104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.721245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.721441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.721455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 
00:29:10.007 [2024-05-15 08:40:56.721534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.721602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.721615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.721704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.721778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.721791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.721870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.722070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.722083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.722169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.722232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.722246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.722383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.722531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.722544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.722692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.722836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.722850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.722938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.723066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.723079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 
00:29:10.007 [2024-05-15 08:40:56.723174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.723243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.723257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.723340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.723412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.723425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.723507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.723576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.723590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.723746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.723833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.723846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.723928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.724070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.724084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.724157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.724234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.724248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.724329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.724527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.724536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 
00:29:10.007 [2024-05-15 08:40:56.724610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.724817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.724827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.724959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.725085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.725095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.725170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.725236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.725245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.725383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.725454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.725464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.725526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.725652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.725662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.725785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.725851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.725860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.725942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.726028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.726037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 
00:29:10.007 [2024-05-15 08:40:56.726183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.726306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.726315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.726390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.726525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.726535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.726643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.726808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.726824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.726905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.727041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.727054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.727199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.727272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.727285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-05-15 08:40:56.727481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-05-15 08:40:56.727680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.727693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.727785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.727846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.727859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 
00:29:10.008 [2024-05-15 08:40:56.727997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.728094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.728107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.728204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.728339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.728353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.728429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.728566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.728579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.728660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.728729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.728742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.728879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.728952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.728966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.729100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.729177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.729191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.729348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.729481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.729494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 
00:29:10.008 [2024-05-15 08:40:56.729645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.729715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.729728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.729824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.729888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.729901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.729971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.730105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.730118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.730317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.730380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.730389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.730546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.730624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.730634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.730706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.730769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.730778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.730854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.731012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.731023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 
00:29:10.008 [2024-05-15 08:40:56.731108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.731180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.731191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.731420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.731496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.731506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.731637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.731772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.731783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.731940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.732065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.732075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.732152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.732222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.732232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.732288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.732417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.732426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-05-15 08:40:56.732554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.732684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-05-15 08:40:56.732693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 
00:29:10.008 [2024-05-15 08:40:56.732819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.008 [2024-05-15 08:40:56.732956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.008 [2024-05-15 08:40:56.732966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.009 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for roughly 150 further attempts between 08:40:56.732819 and 08:40:56.768121 (runtime 00:29:10.008 through 00:29:10.015), cycling through tqpair handles 0x7f86b4000b90, 0x7f86ac000b90, and 0x1a50c10, all against addr=10.0.0.2, port=4420; every attempt ends the same way ...]
00:29:10.015 [2024-05-15 08:40:56.767980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.015 [2024-05-15 08:40:56.768107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.015 [2024-05-15 08:40:56.768121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:10.015 qpair failed and we were unable to recover it.
00:29:10.015 [2024-05-15 08:40:56.768193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-05-15 08:40:56.768276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-05-15 08:40:56.768289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-05-15 08:40:56.768374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-05-15 08:40:56.768436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-05-15 08:40:56.768452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-05-15 08:40:56.768678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-05-15 08:40:56.768769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-05-15 08:40:56.768782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-05-15 08:40:56.768858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-05-15 08:40:56.768948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-05-15 08:40:56.768962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-05-15 08:40:56.769113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-05-15 08:40:56.769259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-05-15 08:40:56.769273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.769341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.769426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.769439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.769507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.769702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.769715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 
00:29:10.016 [2024-05-15 08:40:56.769849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.769931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.769945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.770014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.770160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.770178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.770315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.770451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.770465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.770549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.770687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.770700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.770772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.770834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.770847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.770929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.771065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.771078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.771219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.771399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.771413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 
00:29:10.016 [2024-05-15 08:40:56.771558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.771690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.771703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.771845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.771916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.771929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.772078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.772159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.772178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.772324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.772555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.772568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.772717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.772793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.772807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.772887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.772967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.772981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.773050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.773210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.773224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 
00:29:10.016 [2024-05-15 08:40:56.773359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.773489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.773503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.773666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.773734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.773747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.773829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.773979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.773992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.774196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.774323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.774337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.774414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.774556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.774570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.774769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.774865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.774878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.775042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.775117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.775131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 
00:29:10.016 [2024-05-15 08:40:56.775286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.775417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.775430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.775569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.775711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.775724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.775877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.776027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.776040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.776108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.776240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.776253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.776319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.776464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-05-15 08:40:56.776473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-05-15 08:40:56.776617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.776777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.776787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.776952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.777021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.777031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 
00:29:10.017 [2024-05-15 08:40:56.777160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.777246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.777255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.777394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.777524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.777533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.777592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.777785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.777794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.777856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.777976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.777985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.778078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.778222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.778232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.778303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.778390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.778400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.778562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.778642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.778652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 
00:29:10.017 [2024-05-15 08:40:56.778808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.778967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.778977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.779046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.779130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.779140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.779271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.779396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.779406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.779565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.779630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.779639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.779777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.779844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.779853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.779908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.780038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.780047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.780243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.780383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.780393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 
00:29:10.017 [2024-05-15 08:40:56.780465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.780532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.780541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.780714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.780853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.780862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.780946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.781090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.781301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.781489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.781639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 
00:29:10.017 [2024-05-15 08:40:56.781796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.781934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.781993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.782056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.782064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.782221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.782363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.782372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.782453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.782599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.782610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.782743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.782797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.782806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-05-15 08:40:56.782897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-05-15 08:40:56.782961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.782971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.783049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.783118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.783128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 
00:29:10.018 [2024-05-15 08:40:56.783192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.783314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.783324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.783446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.783622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.783631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.783720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.783920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.783929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.784054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.784136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.784145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.784209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.784280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.784290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.784346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.784467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.784476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.784545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.784625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.784635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 
00:29:10.018 [2024-05-15 08:40:56.784758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.784953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.784963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.785032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.785174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.785184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.785251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.785317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.785326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.785452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.785520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.785530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.785656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.785808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.785819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.785941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.786015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.786024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.786097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.786237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.786247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 
00:29:10.018 [2024-05-15 08:40:56.786327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.786390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.786400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.786472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.786608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.786618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.786757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.786880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.786890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.787105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.787172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.787182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.787238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.787375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.787385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.787454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.787529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.787540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.787670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.787870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.787880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 
00:29:10.018 [2024-05-15 08:40:56.788100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.788171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.788181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.788247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.788458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.788468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.788556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.788624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.788634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.788718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.788786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.788796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.018 [2024-05-15 08:40:56.788985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.789117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.018 [2024-05-15 08:40:56.789127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.018 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.789212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.789340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.789350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.789515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.789661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.789671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 
00:29:10.019 [2024-05-15 08:40:56.789812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.789879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.789889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.790023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.790081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.790093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.790246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.790370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.790380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.790515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.790645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.790655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.790731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.790864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.790875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.790961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.791017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.791026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.791245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.791320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.791330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 
00:29:10.019 [2024-05-15 08:40:56.791388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.791461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.791470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.791627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.791686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.791695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.791771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.791828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.791837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.791906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.792094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.792103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.792243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.792316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.792327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.792405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.792538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.792547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.792618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.792686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.792696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 
00:29:10.019 [2024-05-15 08:40:56.792833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.792893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.792904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.793036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.793093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.793103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.793234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.793365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.793375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.793536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.793681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.793692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.793817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.793941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.793952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.794008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.794071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.794080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 00:29:10.019 [2024-05-15 08:40:56.794157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.794320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.794329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.019 qpair failed and we were unable to recover it. 
00:29:10.019 [2024-05-15 08:40:56.794467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.794538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.019 [2024-05-15 08:40:56.794549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.794695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.794764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.794774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.794963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.795035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.795045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.795103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.795233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.795244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.795390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.795518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.795527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.795660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.795734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.795743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.795802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.795871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.795880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 
00:29:10.020 [2024-05-15 08:40:56.795961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.796084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.796093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.796172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.796245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.796255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.796325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.796401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.796411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.796541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.796609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.796618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.796763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.796833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.796843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.796914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.797107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.797117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.797185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.797331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.797341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 
00:29:10.020 [2024-05-15 08:40:56.797471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.797528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.797538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.797664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.797798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.797809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.797891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.798156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.798306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.798509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.798656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 
00:29:10.020 [2024-05-15 08:40:56.798864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.798928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.798978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.799051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.799061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.799137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.799208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.799217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.799352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.799493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.799503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.799565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.799695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.799704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.799834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.799899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.799908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.800052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.800122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.800132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 
00:29:10.020 [2024-05-15 08:40:56.800363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.800431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.800441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.800572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.800646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.800656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.020 qpair failed and we were unable to recover it. 00:29:10.020 [2024-05-15 08:40:56.800725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.020 [2024-05-15 08:40:56.800793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.800803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.800946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.801081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.801090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.801290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.801425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.801435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.801513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.801702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.801711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.801843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.801987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.801996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 
00:29:10.021 [2024-05-15 08:40:56.802064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.802140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.802150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.802228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.802289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.802300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.802366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.802489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.802499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.802621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.802706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.802715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.802778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.802900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.802910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.802996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.803147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.803157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.803307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.803389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.803398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 
00:29:10.021 [2024-05-15 08:40:56.803525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.803605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.803614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.803683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.803873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.803883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.803941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.804167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.804294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.804499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.804732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 
00:29:10.021 [2024-05-15 08:40:56.804871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.804951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.805035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.805253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.805405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.805535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.805674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.805811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.805894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 
00:29:10.021 [2024-05-15 08:40:56.806016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.806172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.806183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.806373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.806500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.806510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.806585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.806644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.806654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.806778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.806877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.806886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.021 qpair failed and we were unable to recover it. 00:29:10.021 [2024-05-15 08:40:56.806955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.021 [2024-05-15 08:40:56.807019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.807028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.807159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.807240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.807249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.807373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.807496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.807505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 
00:29:10.022 [2024-05-15 08:40:56.807658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.807731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.807740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.807865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.808125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.808280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.808424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.808548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.808739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.808887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 
00:29:10.022 [2024-05-15 08:40:56.808957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.809033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.809043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.809115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.809238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.809249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.809312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.809460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.809470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.809625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.809700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.809709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.809777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.809914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.809923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.809980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.810056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.810066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.810123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.810261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.810271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 
00:29:10.022 [2024-05-15 08:40:56.810329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.810397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.810407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.810476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.810539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.810549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.810610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.810678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.810689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.810858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.811071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.811081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.811242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.811378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.811391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.811472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.811620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.811633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.811720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.811802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.811815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 
00:29:10.022 [2024-05-15 08:40:56.811969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.812099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.812112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.812184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.812262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.812276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.812365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.812494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.812508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.812605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.812733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.812747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.812892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.812970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.812983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.813150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.813244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.813259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 00:29:10.022 [2024-05-15 08:40:56.813346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.813474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.022 [2024-05-15 08:40:56.813487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.022 qpair failed and we were unable to recover it. 
00:29:10.023 [2024-05-15 08:40:56.813583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.813719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.813731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.813809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.813974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.813987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.814056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.814116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.814128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.814210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.814288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.814301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.814375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.814447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.814461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.814530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.814662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.814674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.814805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.814945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.814958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 
00:29:10.023 [2024-05-15 08:40:56.815108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.815242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.815256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.815405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.815467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.815480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.815641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.815723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.815736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.815866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.816118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.816300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.816484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 
00:29:10.023 [2024-05-15 08:40:56.816641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.816845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.816923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.817124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.817333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.817347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.817569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.817722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.817735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.817889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.817981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.817994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.818149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.818239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.818253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 00:29:10.023 [2024-05-15 08:40:56.818319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.818561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.023 [2024-05-15 08:40:56.818575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.023 qpair failed and we were unable to recover it. 
00:29:10.023 [2024-05-15 08:40:56.818644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.023 [2024-05-15 08:40:56.818779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.023 [2024-05-15 08:40:56.818791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:10.023 qpair failed and we were unable to recover it.
[... the same group — connect() failed, errno = 111; sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats from 08:40:56.818874 through 08:40:56.846875 ...]
00:29:10.027 [2024-05-15 08:40:56.846959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.027 [2024-05-15 08:40:56.847025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.027 [2024-05-15 08:40:56.847038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:10.027 qpair failed and we were unable to recover it.
00:29:10.027 [2024-05-15 08:40:56.847135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.027 [2024-05-15 08:40:56.847354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.027 [2024-05-15 08:40:56.847370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:10.027 qpair failed and we were unable to recover it.
[... the same group repeats for tqpair=0x1a50c10 through 08:40:56.850152, for tqpair=0x7f86ac000b90 from 08:40:56.850233 through 08:40:56.852047, and for tqpair=0x7f86b4000b90 from 08:40:56.852201 through 08:40:56.855132 ...]
00:29:10.028 [2024-05-15 08:40:56.855261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.028 [2024-05-15 08:40:56.855347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.028 [2024-05-15 08:40:56.855357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.028 qpair failed and we were unable to recover it. 00:29:10.028 [2024-05-15 08:40:56.855426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.028 [2024-05-15 08:40:56.855514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.028 [2024-05-15 08:40:56.855523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.028 qpair failed and we were unable to recover it. 00:29:10.028 [2024-05-15 08:40:56.855587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.028 [2024-05-15 08:40:56.855711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.028 [2024-05-15 08:40:56.855722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.028 qpair failed and we were unable to recover it. 00:29:10.028 [2024-05-15 08:40:56.855925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.028 [2024-05-15 08:40:56.856075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.028 [2024-05-15 08:40:56.856085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.028 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.856216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.856285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.856295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.856365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.856449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.856459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.856531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.856735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.856747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 
00:29:10.029 [2024-05-15 08:40:56.856880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.856946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.856956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.857085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.857151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.857161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.857234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.857380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.857389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.857469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.857525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.857534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.857673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.857746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.857756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.857826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.857945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.857955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.858094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.858169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.858179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 
00:29:10.029 [2024-05-15 08:40:56.858251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.858384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.858393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.858447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.858505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.858514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.858579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.858636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.858646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.858711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.858839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.858848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.858973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.859188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.859305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 
00:29:10.029 [2024-05-15 08:40:56.859447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.859586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.859808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.859975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.860046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.860126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.860135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.860197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.860271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.860280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.860408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.860608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.860620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.860821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.860903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.860913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 
00:29:10.029 [2024-05-15 08:40:56.860991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.861110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.861120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.861263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.861387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.861397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.861542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.861700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.861710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.861851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.861991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.862001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.862067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.862151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.862161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.862356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.862480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.862490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.862574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.862644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.862654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 
00:29:10.029 [2024-05-15 08:40:56.862814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.862938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.862948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.863023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.863098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.863108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.863184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.863319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.863328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.029 [2024-05-15 08:40:56.863489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.863555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.029 [2024-05-15 08:40:56.863565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.029 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.863701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.863774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.863783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.863874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.864011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.864021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.864113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.864265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.864275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 
00:29:10.030 [2024-05-15 08:40:56.864402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.864562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.864572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.864699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.864773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.864783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.864863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.864935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.864945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.865079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.865267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.865277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.865338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.865400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.865409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.865479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.865571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.865580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.865707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.865789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.865798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 
00:29:10.030 [2024-05-15 08:40:56.865868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.865925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.865933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.866059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.866186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.866196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.866265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.866342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.866353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.866499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.866638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.866648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.866705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.866760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.866768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.866825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.866969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.866980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.867128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.867265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.867275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 
00:29:10.030 [2024-05-15 08:40:56.867346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.867402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.867411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.867487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.867610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.867624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.867720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.867810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.867820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.867900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.867968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.867977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.868052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.868196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.868326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 
00:29:10.030 [2024-05-15 08:40:56.868488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.868697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.868860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.868946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.869027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.869171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.869315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.869501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 
00:29:10.030 [2024-05-15 08:40:56.869643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.869853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.869954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.870030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.870151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.870160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.030 [2024-05-15 08:40:56.870234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.870360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-05-15 08:40:56.870370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.030 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.870431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.870505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.870515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.870575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.870695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.870705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.870780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.870842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.870852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 
00:29:10.031 [2024-05-15 08:40:56.870930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.870992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.871002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.871126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.871192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.871203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.871330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.871391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.871400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.871477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.871567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.871577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.871641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.871832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.871841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.871986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.872072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.872082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.872148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.872235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.872245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 
00:29:10.031 [2024-05-15 08:40:56.872400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.872467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.872477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.872600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.872723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.872732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.872801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.872855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.872865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.873003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.873064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.873074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.873204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.873272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.873282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.873356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.873552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.873561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.873685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.873748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.873758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 
00:29:10.031 [2024-05-15 08:40:56.873892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.873947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.873957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.874037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.874172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.874182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.874308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.874377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.874386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.874455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.874507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.874517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.874581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.874712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.874722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.874791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.874860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.874869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.875103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.875277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.875291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 
00:29:10.031 [2024-05-15 08:40:56.875381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.875443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.875455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.875586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.875649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.875661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.875819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.875894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.875907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.875979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.876112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.876125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.876208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.876274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.876288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.876421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.876502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.876515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 00:29:10.031 [2024-05-15 08:40:56.876660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.876801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-05-15 08:40:56.876814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.031 qpair failed and we were unable to recover it. 
00:29:10.031 [2024-05-15 08:40:56.876887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.032 [2024-05-15 08:40:56.877041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.032 [2024-05-15 08:40:56.877054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:10.032 qpair failed and we were unable to recover it.
[... ~145 similar reconnect attempts between 08:40:56.877 and 08:40:56.911 omitted. Each attempt logs the same four records: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." The failing tqpair moves from 0x7f86ac000b90 to 0x1a50c10, back to 0x7f86ac000b90, briefly to 0x7f86bc000b90, and finally to 0x7f86b4000b90. ...]
00:29:10.036 [2024-05-15 08:40:56.911712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.036 [2024-05-15 08:40:56.911907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.036 [2024-05-15 08:40:56.911917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.036 qpair failed and we were unable to recover it.
00:29:10.036 [2024-05-15 08:40:56.911990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.912077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.912086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.912157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.912377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.912387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.912461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.912626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.912636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.912765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.912836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.912846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.913058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.913140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.913150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.913219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.913353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.913363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.913498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.913622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.913632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 
00:29:10.036 [2024-05-15 08:40:56.913706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.913794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.913804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.913877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.913961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.913971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.914124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.914183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.914194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.914341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.914579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.914589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.914650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.914732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.914742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.914874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.915139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 
00:29:10.036 [2024-05-15 08:40:56.915410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.915619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.915750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.915884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.915950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.916089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.916215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.916225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.916362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.916447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.916457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.916579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.916654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.916664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 
00:29:10.036 [2024-05-15 08:40:56.916744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.916823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.916833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.916972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.917046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.917055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.917261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.917430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.917440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.917574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.917646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.917656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.917731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.917799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.917809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.917939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.918140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 
00:29:10.036 [2024-05-15 08:40:56.918295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.918423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.918643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.918846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.918931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.918987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.919046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.919055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.919118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.919184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.919193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.919264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.919337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.919347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 
00:29:10.036 [2024-05-15 08:40:56.919482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.919615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.919625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.919749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.919876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.919886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.036 [2024-05-15 08:40:56.919958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.920040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.036 [2024-05-15 08:40:56.920050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.036 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.920122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.920264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.920274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.920510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.920633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.920643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.920784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.920854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.920863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.920934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.921064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.921074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 
00:29:10.037 [2024-05-15 08:40:56.921147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.921206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.921216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.921341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.921476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.921485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.921613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.921754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.921764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.921837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.921917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.921926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.922075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.922270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.922281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.922420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.922486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.922496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.922647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.922776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.922786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 
00:29:10.037 [2024-05-15 08:40:56.922909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.923033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.923043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.923285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.923424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.923434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.923504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.923564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.923574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.923717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.923773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.923782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.923862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.923998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.924007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.924076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.924151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.924161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.924322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.924396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.924405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 
00:29:10.037 [2024-05-15 08:40:56.924541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.924675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.924685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.924748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.924872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.924882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.925002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.925075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.925085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.925241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.925366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.925376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.925436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.925504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.925513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.925710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.925772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.925781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.925843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.925918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.925928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 
00:29:10.037 [2024-05-15 08:40:56.926008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.926155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.926168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.926237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.926429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.926438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.926504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.926693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.926702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.926771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.926828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.926837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.926978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.927173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.927386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 
00:29:10.037 [2024-05-15 08:40:56.927522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.927668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.927863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.927927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.928051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.928120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.928130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.928204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.928291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.928301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.928360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.928437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.928446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 00:29:10.037 [2024-05-15 08:40:56.928504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.928657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.928667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.037 qpair failed and we were unable to recover it. 
00:29:10.037 [2024-05-15 08:40:56.928761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.037 [2024-05-15 08:40:56.928919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.928928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.928994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.929118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.929128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.929194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.929269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.929279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.929338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.929391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.929400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.929460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.929585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.929595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.929724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.929940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.929950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.930033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.930168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.930178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 
00:29:10.038 [2024-05-15 08:40:56.930252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.930321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.930331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.930392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.930455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.930465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.930605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.930660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.930669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.930738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.930817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.930827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.930969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.931103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.931242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 
00:29:10.038 [2024-05-15 08:40:56.931396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.931615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.931818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.931996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.932006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.932126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.932264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.932274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.932440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.932562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.932572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.932633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.932702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.932711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 00:29:10.038 [2024-05-15 08:40:56.932854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.932981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.038 [2024-05-15 08:40:56.932992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.038 qpair failed and we were unable to recover it. 
00:29:10.038 [2024-05-15 08:40:56.933068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:10.038 [2024-05-15 08:40:56.933236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:10.038 [2024-05-15 08:40:56.933246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 
00:29:10.038 qpair failed and we were unable to recover it. 
[... the four-line sequence above repeats for every subsequent reconnect attempt through 2024-05-15 08:40:56.963 (Jenkins time 00:29:10.038-00:29:10.042), differing only in timestamps: each attempt logs the same posix_sock_create connect() failures with errno = 111 and the same nvme_tcp_qpair_connect_sock error for tqpair=0x7f86b4000b90 (addr=10.0.0.2, port=4420), and each ends with "qpair failed and we were unable to recover it." ...]
00:29:10.042 [2024-05-15 08:40:56.963438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-05-15 08:40:56.963514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-05-15 08:40:56.963523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-05-15 08:40:56.963582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-05-15 08:40:56.963722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-05-15 08:40:56.963731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.042 qpair failed and we were unable to recover it. 00:29:10.042 [2024-05-15 08:40:56.963823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.042 [2024-05-15 08:40:56.963951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.963961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.964031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.964155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.964168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.964366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.964428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.964438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.964509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.964705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.964715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.964793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.964918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.964928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 
00:29:10.043 [2024-05-15 08:40:56.964990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.965050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.965060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.965189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.965251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.965260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.965332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.965470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.965480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.965554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.965687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.965696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.965773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.965906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.965916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.965989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.966042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.966051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.966176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.966246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.966256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 
00:29:10.043 [2024-05-15 08:40:56.966312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.966499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.966509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.966633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.966704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.966713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.966795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.966922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.966933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.966993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.967050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.967059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.967127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.967292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.967303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.967366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.967424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.967433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.967625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.967697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.967707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 
00:29:10.043 [2024-05-15 08:40:56.967768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.967825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.967834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.967955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.968013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.968023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.968088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.968154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.968166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.968371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.968458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.968468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.968593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.968680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.968689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.968765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.968843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.968854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.969047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.969181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.969191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 
00:29:10.043 [2024-05-15 08:40:56.969416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.969495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.969506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.969701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.969835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.969845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.043 qpair failed and we were unable to recover it. 00:29:10.043 [2024-05-15 08:40:56.969901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.043 [2024-05-15 08:40:56.969985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.969994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.970161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.970376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.970386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.970581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.970728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.970737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.970807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.970947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.970956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.971103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.971253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.971263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 
00:29:10.044 [2024-05-15 08:40:56.971387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.971512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.971521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.971645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.971769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.971780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.971941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.972074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.972083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.972139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.972328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.972337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.972408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.972547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.972557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.972636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.972695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.972705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.972843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.973036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.973046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 
00:29:10.044 [2024-05-15 08:40:56.973174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.973308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.973317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.973391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.973514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.973524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.973691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.973764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.973773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.973842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.973965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.973975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.974034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.974090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.974101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.974156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.974315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.974326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.974458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.974531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.974540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 
00:29:10.044 [2024-05-15 08:40:56.974670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.974733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.974743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.974825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.974899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.974909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.975040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.975172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.975182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.975252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.975338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.975347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.975419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.975496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.975506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.975578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.975713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.975722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.975847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.975917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.975927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 
00:29:10.044 [2024-05-15 08:40:56.976062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.976202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.976215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.044 qpair failed and we were unable to recover it. 00:29:10.044 [2024-05-15 08:40:56.976286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.976369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.044 [2024-05-15 08:40:56.976380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.976569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.976708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.976719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.976846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.976910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.976920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.977047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.977194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.977203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.977341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.977485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.977494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.977559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.977694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.977703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 
00:29:10.045 [2024-05-15 08:40:56.977760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.977900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.977910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.977983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.978053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.978062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.978127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.978185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.978194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.978267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.978336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.978346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.978408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.978478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.978487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.978613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.978739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.978748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.978891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 
00:29:10.045 [2024-05-15 08:40:56.979115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.979318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.979524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.979763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.979904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.979987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.980046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.980110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.980120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.980243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.980364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.980373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 
00:29:10.045 [2024-05-15 08:40:56.980506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.980584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.980593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.980671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.980862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.980871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.980963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.981114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.981124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.981250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.981378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.981387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.981458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.981603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.981612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.981736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.981895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.981904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.981974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.982053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.982062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 
00:29:10.045 [2024-05-15 08:40:56.982204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.982277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.045 [2024-05-15 08:40:56.982287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.045 qpair failed and we were unable to recover it. 00:29:10.045 [2024-05-15 08:40:56.982352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.982406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.982415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.982479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.982547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.982557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.982641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.982788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.982798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.982870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.983155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.983298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 
00:29:10.046 [2024-05-15 08:40:56.983435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.983648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.983801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.983934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.984069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.984159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.984172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.984235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.984295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.984304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.984448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.984517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.984527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 00:29:10.046 [2024-05-15 08:40:56.984654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.984806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.046 [2024-05-15 08:40:56.984816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.046 qpair failed and we were unable to recover it. 
00:29:10.046 [2024-05-15 08:40:56.984947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.046 [2024-05-15 08:40:56.985013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.046 [2024-05-15 08:40:56.985023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.046 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed" sequence repeats for tqpair=0x7f86b4000b90 from 08:40:56.985149 through 08:40:56.991254 ...]
00:29:10.047 [2024-05-15 08:40:56.991416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.047 [2024-05-15 08:40:56.991569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.047 [2024-05-15 08:40:56.991583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.047 qpair failed and we were unable to recover it.
[... sequence repeats for tqpair=0x7f86bc000b90 from 08:40:56.991663 through 08:40:56.993774 ...]
00:29:10.048 [2024-05-15 08:40:56.993881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.048 [2024-05-15 08:40:56.993985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.048 [2024-05-15 08:40:56.994000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:10.048 qpair failed and we were unable to recover it.
00:29:10.048 [2024-05-15 08:40:56.994227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.048 [2024-05-15 08:40:56.994369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.048 [2024-05-15 08:40:56.994382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420
00:29:10.048 qpair failed and we were unable to recover it.
[... sequence repeats for tqpair=0x7f86ac000b90 from 08:40:56.994538 through 08:40:57.021304; the Jenkins log clock advances from 00:29:10.048 to 00:29:10.334 over this span ...]
00:29:10.334 [2024-05-15 08:40:57.021388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.021469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.021482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.021629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.021713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.021726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.021872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.021946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.021959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.022022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.022088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.022101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.022236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.022364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.022378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.022512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.022583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.022596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.022661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.022787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.022800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 
00:29:10.334 [2024-05-15 08:40:57.022947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.023100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.023129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.023255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.023353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.023382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.023555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.023800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.023829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.024059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.024188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.024218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.024321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.024503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.024532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.024792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.024949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.024977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.025196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.025264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.025277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 
00:29:10.334 [2024-05-15 08:40:57.025475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.025676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.025689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.025832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.025937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.025966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.026135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.026311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.026341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.026536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.026694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.026730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.026953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.027102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.027130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.027323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.027524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.027553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.027670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.027793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.027822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 
00:29:10.334 [2024-05-15 08:40:57.027932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.028059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.028073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.028285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.028458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.028487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.028758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.028931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.028944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.334 qpair failed and we were unable to recover it. 00:29:10.334 [2024-05-15 08:40:57.029021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.334 [2024-05-15 08:40:57.029187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.029200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.029346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.029422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.029435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.029580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.029728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.029742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.029820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.029905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.029918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 
00:29:10.335 [2024-05-15 08:40:57.030062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.030219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.030233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.030391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.030534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.030547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.030710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.030852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.030865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.030954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.031087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.031100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.031181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.031248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.031260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.031326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.031458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.031470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.031602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.031808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.031837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 
00:29:10.335 [2024-05-15 08:40:57.031971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.032067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.032094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.032224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.032406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.032435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.032630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.032798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.032826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.033013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.033198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.033227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.033397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.033670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.033699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.034006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.034090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.034104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.034246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.034321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.034335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 
00:29:10.335 [2024-05-15 08:40:57.034551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.034625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.034638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.034857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.035020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.035048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.035327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.035434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.035462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.035692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.035903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.035931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.036053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.036146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.335 [2024-05-15 08:40:57.036160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.335 qpair failed and we were unable to recover it. 00:29:10.335 [2024-05-15 08:40:57.036310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.036396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.036409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.036492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.036692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.036721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 
00:29:10.336 [2024-05-15 08:40:57.036896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.037064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.037092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.037264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.037404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.037418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.037495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.037717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.037730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.037799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.037964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.037993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.038095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.038215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.038246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.038435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.038596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.038624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.038744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.038835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.038863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 
00:29:10.336 [2024-05-15 08:40:57.038981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.039182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.039211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.039401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.039581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.039610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.039777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.039949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.039977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.040145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.040245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.040275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.040407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.040569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.040598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.040710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.040872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.040901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.041002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.041178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.041207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 
00:29:10.336 [2024-05-15 08:40:57.041337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.041462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.041491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.041658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.041892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.041921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.042187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.042252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.042265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.042362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.042448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.042461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.042622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.042697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.042710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.042848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.043052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.043066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.043208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.043289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.043302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 
00:29:10.336 [2024-05-15 08:40:57.043464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.043625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.043638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.043715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.043953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.043967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.044109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.044270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.044283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.044467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.044683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.044697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.336 [2024-05-15 08:40:57.044782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.044871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.336 [2024-05-15 08:40:57.044883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.336 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.045028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.045196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.045210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.045380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.045460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.045473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 
00:29:10.337 [2024-05-15 08:40:57.045543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.045625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.045639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.045785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.045928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.045942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.046178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.046357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.046385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.046490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.046653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.046681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.046804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.046908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.046936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.047058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.047121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.047135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.047200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.047269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.047281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 
00:29:10.337 [2024-05-15 08:40:57.047361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.047428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.047441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.047507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.047727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.047741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.047894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.048044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.048060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.048138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.048217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.048230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.048300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.048432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.048444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.048601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.048695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.048708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.048911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.048987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 
00:29:10.337 [2024-05-15 08:40:57.049068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.049239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.049398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.049642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.049807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.049905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.049981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.050204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 
00:29:10.337 [2024-05-15 08:40:57.050382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.050543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.050697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.050865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.050996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.051010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.337 qpair failed and we were unable to recover it. 00:29:10.337 [2024-05-15 08:40:57.051150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.337 [2024-05-15 08:40:57.051300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.051314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.051470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.051544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.051557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.051685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.051764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.051777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 
00:29:10.338 [2024-05-15 08:40:57.051956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.052106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.052119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.052221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.052298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.052313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.052464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.052541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.052554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.052710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.052802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.052817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.052960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.053091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.053105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.053190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.053317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.053331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.053427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.053508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.053522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 
00:29:10.338 [2024-05-15 08:40:57.053669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.053747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.053760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.053826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.053910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.053924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.054010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.054205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.054219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.054291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.054358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.054370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.054568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.054684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.054718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.054819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.054980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.055009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.055119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.055237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.055251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 
00:29:10.338 [2024-05-15 08:40:57.055313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.055537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.055566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.055736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.055841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.055868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.056045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.056219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.056249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.056364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.056564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.056593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.056828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.057022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.057051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.057163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.057353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.057367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.057463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.057625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.057639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 
00:29:10.338 [2024-05-15 08:40:57.057781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.057983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.058011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.058127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.058271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.058300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.058411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.058502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.058531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.058745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.058900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.058929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.338 qpair failed and we were unable to recover it. 00:29:10.338 [2024-05-15 08:40:57.059040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.338 [2024-05-15 08:40:57.059224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.059254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.059420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.059598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.059628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.059724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.059833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.059861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 
00:29:10.339 [2024-05-15 08:40:57.059971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.060218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.060232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.060288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.060368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.060380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.060464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.060604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.060617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.060760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.060956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.060970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.061203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.061329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.061357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.061466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.061559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.061587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.061706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.061809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.061837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 
00:29:10.339 [2024-05-15 08:40:57.062022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.062203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.062233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.062332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.062500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.062529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.062722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.062820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.062848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.063008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.063151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.063172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.063269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.063433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.063445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.063668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.063750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.063763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.063975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.064119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.064148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 
00:29:10.339 [2024-05-15 08:40:57.064365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.064593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.064623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.064804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.064983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.065012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.065184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.065417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.065430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.065629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.065860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.065874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.065957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.066089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.066102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.066247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.066354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.066384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.339 qpair failed and we were unable to recover it. 00:29:10.339 [2024-05-15 08:40:57.066617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.339 [2024-05-15 08:40:57.066734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.066763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 
00:29:10.340 [2024-05-15 08:40:57.066872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.067011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.067024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.067087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.067217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.067230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.067447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.067611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.067640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.067764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.067973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.068002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.068108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.068273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.068287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.068502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.068572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.068585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.068663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.068792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.068805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 
00:29:10.340 [2024-05-15 08:40:57.068887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.069085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.069099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.069249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.069405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.069418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.069505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.069600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.069613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.069826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.069990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.070003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.070146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.070235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.070251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.070395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.070606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.070635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.070819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.071068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.071098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 
00:29:10.340 [2024-05-15 08:40:57.071293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.071424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.071437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.071517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.071684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.071697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.071840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.071933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.071946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.072177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.072426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.072455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.072584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.072847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.072876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.073104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.073202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.073216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.073392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.073485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.073498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 
00:29:10.340 [2024-05-15 08:40:57.073589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.073672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.073685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.073772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.073917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.073930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.074081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.074192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.074206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.074353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.074444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.074457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.074589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.074727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.074740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.074882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.075064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.340 [2024-05-15 08:40:57.075095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.340 qpair failed and we were unable to recover it. 00:29:10.340 [2024-05-15 08:40:57.075291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.075382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.075411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 
00:29:10.341 [2024-05-15 08:40:57.075588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.075759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.075788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.076018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.076189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.076219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.076359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.076485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.076514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.076692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.076851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.076880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.077052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.077160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.077178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.077415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.077596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.077625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.077815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.077976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.078004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 
00:29:10.341 [2024-05-15 08:40:57.078192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.078314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.078342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.078460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.078569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.078599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.078710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.078802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.078830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.078942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.079046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.079075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.079320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.079468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.079482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.079627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.079708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.079722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.079801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.079874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.079887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 
00:29:10.341 [2024-05-15 08:40:57.079963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.080028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.080039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.080067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e770 (9): Bad file descriptor 00:29:10.341 [2024-05-15 08:40:57.080269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.080345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.080358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.080506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.080561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.080570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.080704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.080923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.080953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.081118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.081243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.081273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.081477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.081584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.081613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.081810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.081983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.082013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 
00:29:10.341 [2024-05-15 08:40:57.082144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.082300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.082324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.082483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.082673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.082682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.082833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.082914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.082924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.082990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.083053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.083062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.341 [2024-05-15 08:40:57.083140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.083297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.341 [2024-05-15 08:40:57.083328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.341 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.083443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.083643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.083672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.083791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.083969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.083978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 
00:29:10.342 [2024-05-15 08:40:57.084144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.084222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.084232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.084289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.084346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.084355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.084423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.084492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.084502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.084568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.084632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.084641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.084711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.084864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.084894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.085058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.085183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.085213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.085383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.085480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.085507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 
00:29:10.342 [2024-05-15 08:40:57.085635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.085745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.085774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.085876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.085955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.085964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.086040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.086162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.086176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.086236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.086424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.086434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.086490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.086560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.086570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.086705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.086765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.086775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.086906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.086985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.086994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 
00:29:10.342 [2024-05-15 08:40:57.087079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.087144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.087154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.087231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.087293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.087302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.087370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.087438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.087447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.087517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.087670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.087679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.087841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.088036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.088065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.088245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.088354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.088383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.088560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.088747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.088776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 
00:29:10.342 [2024-05-15 08:40:57.088890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.089062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.089072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.089227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.089283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.089295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.089417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.089607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.089617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.089730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.089931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.089960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.342 qpair failed and we were unable to recover it. 00:29:10.342 [2024-05-15 08:40:57.090062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.342 [2024-05-15 08:40:57.090290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.090321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.090586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.090765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.090800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.091006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.091111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.091141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 
00:29:10.343 [2024-05-15 08:40:57.091386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.091548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.091576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.091681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.091788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.091816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.092013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.092084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.092093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.092239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.092376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.092386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.092454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.092587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.092597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.092671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.092750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.092759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.092883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.093087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.093116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 
00:29:10.343 [2024-05-15 08:40:57.093240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.093348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.093377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.093560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.093666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.093695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.093855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.094045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.094081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.094260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.094327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.094341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.094410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.094484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.094498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.094631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.094715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.094729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.094815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.094885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.094898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 
00:29:10.343 [2024-05-15 08:40:57.095030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.095106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.095118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.095201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.095268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.095281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.095347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.095515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.095529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.095605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.095671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.095684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.095767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.095851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.095863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86ac000b90 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.095934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.096105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 
00:29:10.343 [2024-05-15 08:40:57.096371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.096594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.096741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.343 qpair failed and we were unable to recover it. 00:29:10.343 [2024-05-15 08:40:57.096893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.343 [2024-05-15 08:40:57.096965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.096975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.097124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.097264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.097275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.097337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.097429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.097439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.097564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.097689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.097698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 
00:29:10.344 [2024-05-15 08:40:57.097803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.097916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.097945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.098120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.098249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.098280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.098372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.098508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.098517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.098580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.098655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.098665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.098734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.098797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.098806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.098881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.098965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.098975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.099051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.099108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.099118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 
00:29:10.344 [2024-05-15 08:40:57.099261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.099316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.099326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.099402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.099564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.099574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.099659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.099728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.099737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.099897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.100030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.100039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.100101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.100243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.100252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.100385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.100449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.100457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.100542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.100606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.100615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 
00:29:10.344 [2024-05-15 08:40:57.100675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.100801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.100810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.100931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.101051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.101059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.101200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.101328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.101337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.101401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.101458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.101466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.101530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.101680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.101689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.344 qpair failed and we were unable to recover it. 00:29:10.344 [2024-05-15 08:40:57.101750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.344 [2024-05-15 08:40:57.101872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.101880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.101958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.102092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.102101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 
00:29:10.345 [2024-05-15 08:40:57.102174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.102321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.102333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.102393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.102461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.102469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.102605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.102657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.102666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.102738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.102812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.102822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.103007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.103228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.103441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 
00:29:10.345 [2024-05-15 08:40:57.103589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.103789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.103936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.103993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.104214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.104361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.104526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.104679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 
00:29:10.345 [2024-05-15 08:40:57.104837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.104903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.104974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.105111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.105330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.105481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.105682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.105902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.105972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 
00:29:10.345 [2024-05-15 08:40:57.106035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.106093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.106102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.106183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.106250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.106260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.106324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.106401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.106410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.106469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.106592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.106601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.345 [2024-05-15 08:40:57.106680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.106802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.345 [2024-05-15 08:40:57.106811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.345 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.106869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.106925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.106935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.106991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 
00:29:10.346 [2024-05-15 08:40:57.107125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.107258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.107465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.107690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.107839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.107904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.108034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.108175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 
00:29:10.346 [2024-05-15 08:40:57.108361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.108505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.108634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.108832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.108898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.108967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.109110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.109119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.109177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.109305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.109315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.109380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.109536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.109546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 
00:29:10.346 [2024-05-15 08:40:57.109764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.109829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.109838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.109923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.110116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.110126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.110198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.110324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.110334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.110404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.110474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.110484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.110550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.110616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.110626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.110768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.110892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.110902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.111041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.111098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.111108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 
00:29:10.346 [2024-05-15 08:40:57.111186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.111255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.111265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.111495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.111552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.111561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.111630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.111792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.111802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.111928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.112120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.112129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.346 qpair failed and we were unable to recover it. 00:29:10.346 [2024-05-15 08:40:57.112274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.346 [2024-05-15 08:40:57.112331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.112341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.347 qpair failed and we were unable to recover it. 00:29:10.347 [2024-05-15 08:40:57.112563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.112695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.112704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.347 qpair failed and we were unable to recover it. 00:29:10.347 [2024-05-15 08:40:57.112833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.112972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.112982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.347 qpair failed and we were unable to recover it. 
00:29:10.347 [2024-05-15 08:40:57.113057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.113119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.113129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.347 qpair failed and we were unable to recover it. 00:29:10.347 [2024-05-15 08:40:57.113252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.113393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.113408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.347 qpair failed and we were unable to recover it. 00:29:10.347 [2024-05-15 08:40:57.113545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.113623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.113633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.347 qpair failed and we were unable to recover it. 00:29:10.347 [2024-05-15 08:40:57.113755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.113822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.113831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.347 qpair failed and we were unable to recover it. 00:29:10.347 [2024-05-15 08:40:57.113902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.114035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.114045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.347 qpair failed and we were unable to recover it. 00:29:10.347 [2024-05-15 08:40:57.114109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.114265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.114275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.347 qpair failed and we were unable to recover it. 00:29:10.347 [2024-05-15 08:40:57.114336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.114404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.347 [2024-05-15 08:40:57.114414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.347 qpair failed and we were unable to recover it. 
00:29:10.347 [2024-05-15 08:40:57.114549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.347 [2024-05-15 08:40:57.114613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.347 [2024-05-15 08:40:57.114623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.347 qpair failed and we were unable to recover it.
00:29:10.348 [2024-05-15 08:40:57.120859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.348 [2024-05-15 08:40:57.120927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.348 [2024-05-15 08:40:57.120941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:10.348 qpair failed and we were unable to recover it.
00:29:10.353 [2024-05-15 08:40:57.144797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.353 [2024-05-15 08:40:57.144867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.353 [2024-05-15 08:40:57.144877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.353 qpair failed and we were unable to recover it.
00:29:10.353 [2024-05-15 08:40:57.145005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.145078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.145089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.145222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.145286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.145297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.145352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.145491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.145501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.145586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.145723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.145734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.145805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.145995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.146009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.146098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.146246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.146257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.146334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.146389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.146399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 
00:29:10.353 [2024-05-15 08:40:57.146479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.146556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.146566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.146636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.146828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.146837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.146963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.147016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.147026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.147085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.147155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.147169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.147252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.147333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.147343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.147463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.147598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.147607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.147788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.147859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.147869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 
00:29:10.353 [2024-05-15 08:40:57.147957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.148105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.148254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.148408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.148568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.148771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.148907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.148972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.149029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.149038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 
00:29:10.353 [2024-05-15 08:40:57.149107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.149174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.149184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.149262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.149318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.149328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.149392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.149443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.149455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.353 qpair failed and we were unable to recover it. 00:29:10.353 [2024-05-15 08:40:57.149529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.149585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.353 [2024-05-15 08:40:57.149594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.149657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.149725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.149735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.149793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.149863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.149874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.149945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 
00:29:10.354 [2024-05-15 08:40:57.150096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.150218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.150426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.150557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.150810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.150889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.150964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.151107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 
00:29:10.354 [2024-05-15 08:40:57.151306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.151529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.151664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.151804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.151962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.152023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.152238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.152363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 
00:29:10.354 [2024-05-15 08:40:57.152508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.152641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.152868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.152945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.153015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.153071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.153081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.153148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.153237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.153247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.354 [2024-05-15 08:40:57.153309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.153460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-05-15 08:40:57.153470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.354 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.153525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.153582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.153591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 
00:29:10.355 [2024-05-15 08:40:57.153669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.153728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.153738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.153799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.153855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.153864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.153925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.153979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.153988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.154048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.154184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.154199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.154272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.154348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.154357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.154443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.154512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.154521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.154599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.154737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.154746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 
00:29:10.355 [2024-05-15 08:40:57.154805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.154861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.154870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.154948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.155091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.155232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.155359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.155490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.155628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 
00:29:10.355 [2024-05-15 08:40:57.155793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.155916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.155992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.156047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.156177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.156313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.156447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.156584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 
00:29:10.355 [2024-05-15 08:40:57.156709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.156885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.156952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.157020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.157029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.157159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.157233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.157243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.157302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.157364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.157373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.157431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.157505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-05-15 08:40:57.157515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.355 qpair failed and we were unable to recover it. 00:29:10.355 [2024-05-15 08:40:57.157574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.157641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.157650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.157716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.157859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.157868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 
00:29:10.356 [2024-05-15 08:40:57.157931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.157992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.158071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.158225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.158346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.158474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.158609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.158736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 
00:29:10.356 [2024-05-15 08:40:57.158863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.158990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.159140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.159283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.159415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.159540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.159668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.159800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.159862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 
00:29:10.356 [2024-05-15 08:40:57.159938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.160131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.160259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.160384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.160517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.160651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 00:29:10.356 [2024-05-15 08:40:57.160783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-05-15 08:40:57.160850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.356 qpair failed and we were unable to recover it. 
00:29:10.356 [2024-05-15 08:40:57.160923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.356 [2024-05-15 08:40:57.160979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.356 [2024-05-15 08:40:57.160988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.356 qpair failed and we were unable to recover it.
[... the same sequence (two posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420" record, then "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent connection attempt from 08:40:57.161042 through 08:40:57.199605 ...]
00:29:10.361 [2024-05-15 08:40:57.199768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.361 [2024-05-15 08:40:57.199932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.361 [2024-05-15 08:40:57.199942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.361 qpair failed and we were unable to recover it. 00:29:10.361 [2024-05-15 08:40:57.200181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.361 [2024-05-15 08:40:57.200309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.361 [2024-05-15 08:40:57.200318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.361 qpair failed and we were unable to recover it. 00:29:10.361 [2024-05-15 08:40:57.200457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.361 [2024-05-15 08:40:57.200672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.361 [2024-05-15 08:40:57.200682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.361 qpair failed and we were unable to recover it. 00:29:10.361 [2024-05-15 08:40:57.200911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.361 [2024-05-15 08:40:57.201047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.201057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.201225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.201294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.201305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.201513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.201729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.201739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.201960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.202098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.202107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 
00:29:10.362 [2024-05-15 08:40:57.202305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.202450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.202460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.202609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.202748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.202758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.202893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.203107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.203117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.203367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.203492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.203502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.203657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.203872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.203881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.204078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.204301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.204311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.204501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.204657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.204667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 
00:29:10.362 [2024-05-15 08:40:57.204905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.205118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.205129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.205288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.205479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.205489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.205632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.205786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.205796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.205865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.206103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.206113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.206254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.206408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.206418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.206632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.206868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.206878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.207093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.207252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.207262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 
00:29:10.362 [2024-05-15 08:40:57.207481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.207641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.207651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.207874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.208032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.208041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.208218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.208301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.208311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.208525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.208670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.208681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.208873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.209095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.209105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.209330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.209408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.209418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.209555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.209765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.209774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 
00:29:10.362 [2024-05-15 08:40:57.209916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.210156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.210170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.210319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.210559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.210569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.210708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.210889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.362 [2024-05-15 08:40:57.210899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.362 qpair failed and we were unable to recover it. 00:29:10.362 [2024-05-15 08:40:57.210984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.211114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.211124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.211313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.211381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.211391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.211592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.211797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.211807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.211953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.212141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.212150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 
00:29:10.363 [2024-05-15 08:40:57.212348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.212480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.212489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.212696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.212913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.212923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.213064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.213189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.213199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.213428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.213620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.213630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.213709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.213903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.213912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.214102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.214350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.214361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.214497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.214641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.214651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 
00:29:10.363 [2024-05-15 08:40:57.214864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.215062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.215071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.215205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.215396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.215405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.215539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.215728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.215738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.215936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.216137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.216146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.216295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.216442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.216452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.216578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.216766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.216776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.217007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.217211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.217222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 
00:29:10.363 [2024-05-15 08:40:57.217308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.217501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.217510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.217731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.217855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.217865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.218078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.218223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.218233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.218390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.218603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.218613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.218804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.218929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.218939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.219083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.219221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.219231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.219367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.219501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.219511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 
00:29:10.363 [2024-05-15 08:40:57.219724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.219811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.219821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.220039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.220173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.220183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.220444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.220668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.220678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.220829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.220983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.220993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.221176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.221319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.221329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.221504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.221722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.221731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 00:29:10.363 [2024-05-15 08:40:57.221870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.222083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.222092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.363 qpair failed and we were unable to recover it. 
00:29:10.363 [2024-05-15 08:40:57.222177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.363 [2024-05-15 08:40:57.222301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.222311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.222448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.222651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.222661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.222789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.222844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.222853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.222986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.223150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.223160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.223363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.223611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.223620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.223746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.223961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.223971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.224109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.224365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.224375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 
00:29:10.364 [2024-05-15 08:40:57.224534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.224757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.224767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.224909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.225099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.225108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.225269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.225487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.225497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.225662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.225890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.225900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.226035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.226280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.226290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.226436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.226655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.226664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.226734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.226930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.226939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 
00:29:10.364 [2024-05-15 08:40:57.227082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.227279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.227290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.227450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.227661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.227671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.227812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.227938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.227948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.228178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.228310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.228319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.228509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.228654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.228663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.228902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.229043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.229053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.229193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.229328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.229338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 
00:29:10.364 [2024-05-15 08:40:57.229419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.229566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.229576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.229733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.229866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.229876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.230001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.230148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.230158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.230311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.230437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.230447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.230590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.230793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.230803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.230966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.231124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.231133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.231324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.231405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.231415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 
00:29:10.364 [2024-05-15 08:40:57.231487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.231711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.231721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.231848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.231931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.231940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.232080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.232218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.232228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.232478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.232700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.232710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.232944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.233137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.233147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.364 qpair failed and we were unable to recover it. 00:29:10.364 [2024-05-15 08:40:57.233341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.233564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.364 [2024-05-15 08:40:57.233574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.365 qpair failed and we were unable to recover it. 00:29:10.365 [2024-05-15 08:40:57.233726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.365 [2024-05-15 08:40:57.233961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.365 [2024-05-15 08:40:57.233971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.365 qpair failed and we were unable to recover it. 
00:29:10.365 [2024-05-15 08:40:57.234190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.365 [2024-05-15 08:40:57.234347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.365 [2024-05-15 08:40:57.234357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.365 qpair failed and we were unable to recover it.
[... the same four-line failure pattern repeats verbatim from 08:40:57.234190 through 08:40:57.299850, every attempt against tqpair=0x7f86b4000b90 at addr=10.0.0.2, port=4420, all with errno = 111 ...]
00:29:10.369 [2024-05-15 08:40:57.300085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.300337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.300367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.300624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.300840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.300850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.301007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.301132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.301141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.301353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.301520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.301529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.301724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.301906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.301935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.302214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.302474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.302503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.302761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.302986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.302995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 
00:29:10.369 [2024-05-15 08:40:57.303120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.303245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.303255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.303481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.303570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.303579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.303792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.303928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.303937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.304020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.304217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.304247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.304528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.304781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.304811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.305098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.305273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.305303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.305530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.305750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.305780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 
00:29:10.369 [2024-05-15 08:40:57.306040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.306217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.306247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.306495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.306665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.306694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.306926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.307155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.307201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.307347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.307445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.307474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.307734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.307971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.307981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.308228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.308461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.308471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.369 qpair failed and we were unable to recover it. 00:29:10.369 [2024-05-15 08:40:57.308628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.369 [2024-05-15 08:40:57.308779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.308789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 
00:29:10.370 [2024-05-15 08:40:57.308915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.309066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.309076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.309210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.309356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.309367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.309555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.309799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.309828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.310098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.310282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.310312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.310539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.310737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.310766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.311025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.311205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.311235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.311523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.311779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.311808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 
00:29:10.370 [2024-05-15 08:40:57.311996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.312229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.312259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.312513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.312686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.312714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.312896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.313030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.313040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.313204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.313341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.313350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.313492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.313691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.313720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.313903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.314155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.314193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.314419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.314653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.314682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 
00:29:10.370 [2024-05-15 08:40:57.314960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.315197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.315206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.315399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.315649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.315678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.315863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.316118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.316147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.316407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.316611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.316641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.316919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.317059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.317069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.317232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.317436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.317446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.317654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.317874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.317903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 
00:29:10.370 [2024-05-15 08:40:57.318148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.318430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.318460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.318714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.318928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.318938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.319159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.319390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.319420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.319656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.319905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.319934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.320194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.320428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.320457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.320644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.320822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.320851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.321028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.321222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.321253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 
00:29:10.370 [2024-05-15 08:40:57.321516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.321698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.321707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.321898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.322113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.322142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.322387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.322553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.322582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.322778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.322903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.322913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.370 qpair failed and we were unable to recover it. 00:29:10.370 [2024-05-15 08:40:57.323056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.323230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.370 [2024-05-15 08:40:57.323260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.323530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.323792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.323821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.324037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.324265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.324295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 
00:29:10.371 [2024-05-15 08:40:57.324528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.324758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.324787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.324982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.325080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.325109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.325355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.325558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.325586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.325757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.325940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.325956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.326128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.326348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.326359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.326495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.326666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.326676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.326826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.327035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.327045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 
00:29:10.371 [2024-05-15 08:40:57.327241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.327398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.327428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.327615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.327781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.327818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.327991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.328132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.328147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.328255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.328413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.328424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.328646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.328809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.328838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.329097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.329354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.329384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.371 [2024-05-15 08:40:57.329631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.329854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.329867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 
00:29:10.371 [2024-05-15 08:40:57.330016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.330279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.371 [2024-05-15 08:40:57.330292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.371 qpair failed and we were unable to recover it. 00:29:10.639 [2024-05-15 08:40:57.330493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.330638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.330648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.639 qpair failed and we were unable to recover it. 00:29:10.639 [2024-05-15 08:40:57.330903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.331038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.331047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.639 qpair failed and we were unable to recover it. 00:29:10.639 [2024-05-15 08:40:57.331176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.331302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.331312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.639 qpair failed and we were unable to recover it. 00:29:10.639 [2024-05-15 08:40:57.331553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.331720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.331733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.639 qpair failed and we were unable to recover it. 00:29:10.639 [2024-05-15 08:40:57.331869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.332064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.332074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.639 qpair failed and we were unable to recover it. 00:29:10.639 [2024-05-15 08:40:57.332270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.332399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.332409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.639 qpair failed and we were unable to recover it. 
00:29:10.639 [2024-05-15 08:40:57.332632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.332829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.332839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.639 qpair failed and we were unable to recover it. 00:29:10.639 [2024-05-15 08:40:57.333036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.333175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.333185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.639 qpair failed and we were unable to recover it. 00:29:10.639 [2024-05-15 08:40:57.333334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.333409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.333419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.639 qpair failed and we were unable to recover it. 00:29:10.639 [2024-05-15 08:40:57.333640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.639 [2024-05-15 08:40:57.333825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.333835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.334047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.334243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.334253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.334481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.334676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.334686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.334910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.335058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.335068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 
00:29:10.640 [2024-05-15 08:40:57.335261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.335398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.335412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.335628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.335827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.335837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.335900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.336035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.336045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.336185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.336359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.336369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.336578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.336717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.336727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.336868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.337015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.337025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.337243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.337446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.337456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 
00:29:10.640 [2024-05-15 08:40:57.337672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.337831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.337861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.337981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.338239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.338269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.338509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.338667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.338677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.338821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.338901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.338910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.339122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.339338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.339368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.339583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.339725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.339754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.339993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.340191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.340221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 
00:29:10.640 [2024-05-15 08:40:57.340400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.340655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.340684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.640 [2024-05-15 08:40:57.340937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.341209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.640 [2024-05-15 08:40:57.341240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.640 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.341527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.341706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.341735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.341929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.342180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.342210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.342456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.342685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.342714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.342902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.343100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.343128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.343328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.343529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.343558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 
00:29:10.641 [2024-05-15 08:40:57.343809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.344051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.344080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.344317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.344548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.344577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.344764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.344899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.344908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.345054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.345272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.345282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.345490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.345681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.345691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.345908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.346118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.346128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.346341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.346573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.346602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 
00:29:10.641 [2024-05-15 08:40:57.346791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.347035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.347045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.347203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.347426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.347456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.347670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.347912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.347941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.348185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.348417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.348447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.348700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.348941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.348951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.349090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.349316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.349326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 00:29:10.641 [2024-05-15 08:40:57.349550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.349698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.641 [2024-05-15 08:40:57.349708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.641 qpair failed and we were unable to recover it. 
00:29:10.641 [2024-05-15 08:40:57.349861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.350070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.350099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.350383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.350610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.350640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.350880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.351096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.351125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.351380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.351555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.351584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.351769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.351991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.352020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.352193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.352438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.352467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.352686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.352828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.352857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 
00:29:10.642 [2024-05-15 08:40:57.353117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.353303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.353334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.353506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.353711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.353741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.354002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.354175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.354205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.354444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.354698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.354707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.354870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.355075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.355103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.355299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.355614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.355643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.355907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.356095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.356124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 
00:29:10.642 [2024-05-15 08:40:57.356300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.356486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.356519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.356609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.356807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.356817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.357061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.357280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.357289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.357435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.357609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.357619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.357758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.357896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.357905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.358176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.358358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.358387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.358504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.358744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.358782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 
00:29:10.642 [2024-05-15 08:40:57.358983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.359201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.359232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.359360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.359626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.359655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.359929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.360118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.360128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.360348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.360421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.360430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.360637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.360859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.360889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.361182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.361395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.361424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.642 qpair failed and we were unable to recover it. 00:29:10.642 [2024-05-15 08:40:57.361660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.642 [2024-05-15 08:40:57.361895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.361925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 
00:29:10.643 [2024-05-15 08:40:57.362204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.362473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.362502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.362694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.362890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.362920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.363185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.363418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.363448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.363628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.363811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.363840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.364040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.364293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.364324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.364532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.364708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.364737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.364920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.365097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.365107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 
00:29:10.643 [2024-05-15 08:40:57.365307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.365535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.365565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.365759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.365985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.365995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.366211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.366363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.366393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.366653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.366912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.366941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.367196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.367436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.367465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.367587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.367780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.367809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.368004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.368208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.368218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 
00:29:10.643 [2024-05-15 08:40:57.368421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.368646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.368675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.368872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.369076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.369105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.369292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.369528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.369557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.369823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.369948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.369976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.370185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.370315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.370345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.370601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.370708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.370737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.371000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.371225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.371256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 
00:29:10.643 [2024-05-15 08:40:57.371453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.371632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.371661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.371874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.372010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.372020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.372115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.372323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.372333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.372468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.372615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.372625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.372849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.373000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.373029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.643 [2024-05-15 08:40:57.373242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.373442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.643 [2024-05-15 08:40:57.373472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.643 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.373661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.373847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.373876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 
00:29:10.644 [2024-05-15 08:40:57.374153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.374382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.374393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.374560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.374710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.374720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.374933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.375119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.375148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.375427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.375731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.375761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.376046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.376310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.376341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.376596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.376828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.376858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.377037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.377227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.377257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 
00:29:10.644 [2024-05-15 08:40:57.377454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.377666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.377695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.377970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.378186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.378196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.378411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.378655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.378665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.378817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.379038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.379048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.379194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.379337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.379347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.379490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.379616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.379626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.379829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.380018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.380047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 
00:29:10.644 [2024-05-15 08:40:57.380304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.380500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.380529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.380692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.380896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.380926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.381233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.381510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.381541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.381664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.381917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.381926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.382149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.382386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.382396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.382533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.382609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.382619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.382878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.382962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.382972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 
00:29:10.644 [2024-05-15 08:40:57.383105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.383259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.383269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.383425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.383556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.383566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.644 qpair failed and we were unable to recover it. 00:29:10.644 [2024-05-15 08:40:57.383760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.644 [2024-05-15 08:40:57.383996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.384025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.384222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.384503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.384533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.384724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.384846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.384876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.385145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.385286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.385297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.385540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.385770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.385798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 
00:29:10.645 [2024-05-15 08:40:57.386064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.386325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.386356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.386543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.386715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.386736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.386932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.387002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.387012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.387174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.387334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.387345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.387510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.387758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.387768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.387989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.388240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.388270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.388509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.388671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.388701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 
00:29:10.645 [2024-05-15 08:40:57.388960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.389136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.389173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.389378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.389540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.389569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.389807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.390039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.390068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.390304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.390576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.390605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.390850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.391116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.391126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.391321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.391548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.391559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.391684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.391829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.391839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 
00:29:10.645 [2024-05-15 08:40:57.391914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.391992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.392002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.392196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.392332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.392342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.392605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.392794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.392823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.393047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.393280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.393311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.393499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.393755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.393784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.393996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.394187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.394217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 00:29:10.645 [2024-05-15 08:40:57.394404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.394583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.394612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.645 qpair failed and we were unable to recover it. 
00:29:10.645 [2024-05-15 08:40:57.394879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.395138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.645 [2024-05-15 08:40:57.395184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.395389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.395556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.395590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.395870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.396077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.396113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.396310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.396516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.396545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.396811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.397069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.397083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.397152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.397287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.397297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.397516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.397624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.397633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 
00:29:10.646 [2024-05-15 08:40:57.397804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.397957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.397967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.398047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.398195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.398205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.398423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.398680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.398710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.398888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.399136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.399173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.399431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.399677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.399719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.399895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.400082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.400111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 00:29:10.646 [2024-05-15 08:40:57.400331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.400586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.646 [2024-05-15 08:40:57.400622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.646 qpair failed and we were unable to recover it. 
00:29:10.646 [2024-05-15 08:40:57.400843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.401012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.401041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.401246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.401455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.401484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.401761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.401926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.401955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.402150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.402390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.402421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.402673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.402927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.402956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.403161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.403417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.403447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.403628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.403886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.403916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.404151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.404347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.404382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.404571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.404802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.404832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.405094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.405339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.405370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.405615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.405863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.405873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.406015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.406249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.406279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.406488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.406624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.406653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.406915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.407115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.646 [2024-05-15 08:40:57.407144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.646 qpair failed and we were unable to recover it.
00:29:10.646 [2024-05-15 08:40:57.407379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.407557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.407586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.407848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.408089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.408118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.408415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.408697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.408726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.408995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.409248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.409278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.409522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.409718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.409728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.409926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.410084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.410114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.410361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.410618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.410647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.410898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.410979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.410988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.411248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.411534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.411563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.411764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.411960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.411970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.412205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.412382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.412392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.412606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.412751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.412780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.413046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.413262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.413292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.413529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.413714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.413743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.413946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.414183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.414214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.414386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.414624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.414653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.414917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.415044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.415054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.415309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.415552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.415581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.415851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.416105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.416134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.416393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.416579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.416609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.416772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.416929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.416939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.417141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.417309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.417319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.417560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.417740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.417770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.417941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.418195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.418225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.418492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.418776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.418805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.647 qpair failed and we were unable to recover it.
00:29:10.647 [2024-05-15 08:40:57.419045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.647 [2024-05-15 08:40:57.419280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.419311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.419500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.419776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.419806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.419989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.420179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.420209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.420378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.420631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.420660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.420850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.421110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.421139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.421319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.421609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.421647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.421855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.421997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.422026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.422285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.422456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.422485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.422746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.423013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.423042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.423311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.423524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.423534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.423729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.423946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.423976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.424185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.424372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.424401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.424666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.424851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.424881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.425142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.425311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.425322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.425453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.425634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.425664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.425854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.426090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.426120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.426315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.426633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.426662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.426837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.426979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.426989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.427188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.427368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.427398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.427670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.427950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.427979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.428154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.428345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.428376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.428563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.428798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.428828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.429016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.429097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.429108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.429237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.429460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.429471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.429614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.429826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.429837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.429913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.430050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.430061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.430281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.430413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.430424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.430587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.430728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.648 [2024-05-15 08:40:57.430739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.648 qpair failed and we were unable to recover it.
00:29:10.648 [2024-05-15 08:40:57.430869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.431009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.431020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.431187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.431383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.431394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.431534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.431783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.431792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.431955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.432083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.432093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.432239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.432462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.432472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.432700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.432835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.432845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.433086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.433315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.433326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.433549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.433733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.433744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.433942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.434153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.434163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.434390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.434487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.434497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.434718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.434939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.434949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.435138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.435329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.435340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.435538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.435670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.435680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.435892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.436064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.436073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.436295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.436514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.436524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.436721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.436862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.436872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.437092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.437335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.437345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.437565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.437800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.437810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.437984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.438183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.438194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.438276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.438430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.438440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.649 qpair failed and we were unable to recover it.
00:29:10.649 [2024-05-15 08:40:57.438577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.438714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.649 [2024-05-15 08:40:57.438724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.438922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.439048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.439058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.439234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.439381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.439391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.439527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.439690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.439700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.439923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.440146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.440155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.440350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.440463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.440480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.440662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.440799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.440813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.441033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.441279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.441294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.441490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.441641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.441654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.441857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.442109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.442122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.442296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.442526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.442541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.442762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.442850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.442864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.443036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.443253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.443267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.443466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.443619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.443632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.443887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.444032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.444046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.444132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.444356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.444370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.444593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.444819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.444833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.445035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.445175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.445189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.445452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.445675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.445688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.445945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.446104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.446117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.446353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.446511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.446525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.446734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.446886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.446900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.447131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.447274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.447289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.447438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.447585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.447599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.650 qpair failed and we were unable to recover it.
00:29:10.650 [2024-05-15 08:40:57.447801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.650 [2024-05-15 08:40:57.448003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.448016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.448151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.448384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.448398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.448548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.448764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.448778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.449011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.449114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.449128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.449289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.449436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.449449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.449670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.449826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.449839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.450061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.450298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.450312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.450484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.450659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.450672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.450827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.450976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.450989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.451208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.451356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.451370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.451477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.451652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.451665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.451836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.452041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.452055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.452281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.452498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.452512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.452740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.452841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.452855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.452941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.453142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.651 [2024-05-15 08:40:57.453155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.651 qpair failed and we were unable to recover it.
00:29:10.651 [2024-05-15 08:40:57.453295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.453462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.453476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.453609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.453823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.453837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.454088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.454263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.454277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.454479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.454628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.454642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.454848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.455093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.455107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.455356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.455510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.455524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.455756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.455900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.455914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 
00:29:10.651 [2024-05-15 08:40:57.456116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.456329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.456343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.456587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.456850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.456864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.456956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.457203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.457217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.457425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.457661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.457675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.457832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.458061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.458075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.651 qpair failed and we were unable to recover it. 00:29:10.651 [2024-05-15 08:40:57.458220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.458356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.651 [2024-05-15 08:40:57.458372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.458519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.458739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.458752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 
00:29:10.652 [2024-05-15 08:40:57.458950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.459099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.459113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.459200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.459348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.459361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.459442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.459654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.459667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.459758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.459901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.459914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.460145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.460234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.460248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.460498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.460662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.460675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.460827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.461051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.461065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 
00:29:10.652 [2024-05-15 08:40:57.461209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.461472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.461485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.461733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.461892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.461908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.462134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.462241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.462256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.462479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.462677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.462690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.462906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.463105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.463119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.463289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.463491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.463504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.463733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.463822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.463835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 
00:29:10.652 [2024-05-15 08:40:57.464028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.464176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.464190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.464360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.464439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.464453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.464602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.464698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.464711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.464882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.465083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.465097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.465266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.465513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.465530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.465732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.465883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.465897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.652 [2024-05-15 08:40:57.466069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.466202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.466216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 
00:29:10.652 [2024-05-15 08:40:57.466488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.466587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.652 [2024-05-15 08:40:57.466601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.652 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.466750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.466950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.466964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.467113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.467356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.467371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.467520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.467665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.467678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.467816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.467947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.467961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.468100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.468254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.468268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.468358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.468504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.468517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 
00:29:10.653 [2024-05-15 08:40:57.468663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.468878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.468893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.469056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.469155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.469175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.469322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.469436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.469450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.469670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.469818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.469832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.470062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.470279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.470293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.470373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.470574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.470587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.470819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.470911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.470925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 
00:29:10.653 [2024-05-15 08:40:57.471071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.471228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.471242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.471446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.471668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.471681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.471892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.472114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.472128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.472299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.472453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.472467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.472615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.472703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.472716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.653 [2024-05-15 08:40:57.472940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.473173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.653 [2024-05-15 08:40:57.473187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.653 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.473379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.473526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.473540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 
00:29:10.654 [2024-05-15 08:40:57.473694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.473834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.473847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.473923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.474071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.474084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.474230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.474406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.474420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.474588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.474729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.474742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.474822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.474982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.474995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.475149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.475345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.475360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.475429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.475565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.475579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 
00:29:10.654 [2024-05-15 08:40:57.475719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.475955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.475969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.476038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.476238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.476252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.476504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.476662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.476675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.476875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.477074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.477088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.477233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.477377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.477390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.477614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.477786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.477799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.477950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.478098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.478111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 
00:29:10.654 [2024-05-15 08:40:57.478253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.478451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.478465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.478608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.478767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.478781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.479027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.479172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.479185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.654 [2024-05-15 08:40:57.479322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.479541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.654 [2024-05-15 08:40:57.479554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.654 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.479708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.479928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.479942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.480169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.480302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.480316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.480479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.480686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.480699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 
00:29:10.655 [2024-05-15 08:40:57.480915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.481137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.481150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.481340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.481566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.481580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.481807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.482069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.482082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.482250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.482489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.482502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.482567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.482708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.482721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.482888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.483028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.483042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.483128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.483326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.483340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 
00:29:10.655 [2024-05-15 08:40:57.483494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.483738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.483751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.483979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.484143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.484157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.484398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.484605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.484618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.484895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.485052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.485065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.485331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.485558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.485572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.485670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.485903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.485916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.655 qpair failed and we were unable to recover it. 00:29:10.655 [2024-05-15 08:40:57.486130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.655 [2024-05-15 08:40:57.486368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.486383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 
00:29:10.656 [2024-05-15 08:40:57.486479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.486701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.486715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.486871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.486951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.486964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.487233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.487444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.487457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.487559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.487785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.487798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.488008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.488297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.488311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.488560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.488784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.488798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.489054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.489290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.489304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 
00:29:10.656 [2024-05-15 08:40:57.489457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.489605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.489618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.489716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.489848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.489861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.489935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.490019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.490033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.490185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.490343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.490356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.490459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.490593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.490606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.490767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.490969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.490982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.491218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.491436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.491450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 
00:29:10.656 [2024-05-15 08:40:57.491550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.491762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.491775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.491975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.492175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.492189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.492333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.492530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.492544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.492677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.492846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.492859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.656 qpair failed and we were unable to recover it. 00:29:10.656 [2024-05-15 08:40:57.493023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.656 [2024-05-15 08:40:57.493238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.657 [2024-05-15 08:40:57.493252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.657 qpair failed and we were unable to recover it. 00:29:10.657 [2024-05-15 08:40:57.493481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.657 [2024-05-15 08:40:57.493626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.657 [2024-05-15 08:40:57.493639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.657 qpair failed and we were unable to recover it. 00:29:10.657 [2024-05-15 08:40:57.493886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.657 [2024-05-15 08:40:57.494106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.657 [2024-05-15 08:40:57.494120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.657 qpair failed and we were unable to recover it. 
00:29:10.657 [2024-05-15 08:40:57.494214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.657 [2024-05-15 08:40:57.494439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.657 [2024-05-15 08:40:57.494452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.657 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats continuously from 08:40:57.494214 through 08:40:57.552042 (console timestamps 00:29:10.657-00:29:10.664): two posix_sock_create connect() failures with errno = 111, then one nvme_tcp_qpair_connect_sock error for tqpair=0x7f86bc000b90 at addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."; only the microsecond timestamps differ between repetitions, so they are elided here ...]
00:29:10.664 [2024-05-15 08:40:57.552281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.552400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.552430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.552608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.552836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.552866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.553144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.553332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.553361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.553599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.553849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.553878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.554135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.554357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.554388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.554616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.554867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.554897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.555163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.555436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.555465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 
00:29:10.665 [2024-05-15 08:40:57.555695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.555925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.555954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.556137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.556286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.556318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.556556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.556813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.556842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.557091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.557263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.557293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.557480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.557700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.557730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.557922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.558181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.558212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.558443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.558697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.558727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 
00:29:10.665 [2024-05-15 08:40:57.558983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.559208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.559239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.559431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.559657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.559686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.559941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.560182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.560213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.560392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.560618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.560647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.560910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.561091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.561121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.561361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.561654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.561683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.561925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.562095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.562124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 
00:29:10.665 [2024-05-15 08:40:57.562245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.562326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.562339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.562421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.562670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.562683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.562896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.563059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.563088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.665 [2024-05-15 08:40:57.563277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.563556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.665 [2024-05-15 08:40:57.563584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.665 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.563839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.563949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.563978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.564255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.564527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.564556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.564727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.564888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.564918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 
00:29:10.666 [2024-05-15 08:40:57.565080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.565249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.565263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.565488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.565755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.565768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.565993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.566213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.566226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.566421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.566673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.566702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.566913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.567140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.567197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.567430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.567677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.567690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.567838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.568061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.568090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 
00:29:10.666 [2024-05-15 08:40:57.568252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.568504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.568533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.568782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.568873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.568886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.569049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.569217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.569230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.569430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.569585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.569614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.569846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.570105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.570134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.570275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.570496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.570510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.570650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.570741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.570754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 
00:29:10.666 [2024-05-15 08:40:57.570888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.571117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.571131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.571220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.571306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.571319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.571548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.571741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.571754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.571830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.572026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.572039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.572279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.572506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.572536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.572724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.572905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.572934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.666 qpair failed and we were unable to recover it. 00:29:10.666 [2024-05-15 08:40:57.573189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.666 [2024-05-15 08:40:57.573416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.573430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 
00:29:10.667 [2024-05-15 08:40:57.573650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.573748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.573761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.573898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.574064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.574094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.574374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.574651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.574664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.574800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.575024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.575037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.575198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.575337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.575351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.575563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.575755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.575784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.576053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.576233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.576264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 
00:29:10.667 [2024-05-15 08:40:57.576432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.576540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.576570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.576703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.576891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.576920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.577034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.577282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.577318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.577487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.577700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.577729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.577973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.578178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.578209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.578402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.578589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.578618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.578879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.579140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.579209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 
00:29:10.667 [2024-05-15 08:40:57.579468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.579696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.579725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.579899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.580179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.580210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.580443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.580616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.580646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.580903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.581098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.581128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.581397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.581602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.581615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.581850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.582052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.582086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.582342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.582521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.582551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 
00:29:10.667 [2024-05-15 08:40:57.582783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.583011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.583040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.583305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.583531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.583544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.583716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.583937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.583966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.667 qpair failed and we were unable to recover it. 00:29:10.667 [2024-05-15 08:40:57.584139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.667 [2024-05-15 08:40:57.584350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.584380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.584631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.584857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.584886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.585119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.585303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.585316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.585477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.585642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.585678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 
00:29:10.668 [2024-05-15 08:40:57.585932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.586110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.586139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.586312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.586559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.586594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.586848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.587084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.587113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.587309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.587478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.587507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.587761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.587948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.587977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.588097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.588188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.588202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.588404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.588560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.588589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 
00:29:10.668 [2024-05-15 08:40:57.588706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.588966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.588994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.589181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.589295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.589324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.589533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.589754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.589800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.590059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.590182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.590213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.590475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.590695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.590711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.590948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.591087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.591100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 00:29:10.668 [2024-05-15 08:40:57.591324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.591548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.591577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.668 qpair failed and we were unable to recover it. 
00:29:10.668 [2024-05-15 08:40:57.591858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.592040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.668 [2024-05-15 08:40:57.592069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.669 qpair failed and we were unable to recover it. 00:29:10.669 [2024-05-15 08:40:57.592343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.592477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.592490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.669 qpair failed and we were unable to recover it. 00:29:10.669 [2024-05-15 08:40:57.592756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.593005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.593034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.669 qpair failed and we were unable to recover it. 00:29:10.669 [2024-05-15 08:40:57.593158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.593415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.593444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.669 qpair failed and we were unable to recover it. 00:29:10.669 [2024-05-15 08:40:57.593621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.593808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.593837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.669 qpair failed and we were unable to recover it. 00:29:10.669 [2024-05-15 08:40:57.594022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.594230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.594260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.669 qpair failed and we were unable to recover it. 00:29:10.669 [2024-05-15 08:40:57.594532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.594761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.669 [2024-05-15 08:40:57.594774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.669 qpair failed and we were unable to recover it. 
00:29:10.669 [2024-05-15 08:40:57.594994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.669 [2024-05-15 08:40:57.595215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.669 [2024-05-15 08:40:57.595229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:10.669 qpair failed and we were unable to recover it.
00:29:10.958 [... the same sequence -- posix.c:1037:posix_sock_create "connect() failed, errno = 111" (twice), then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats continuously from 2024-05-15 08:40:57.595517 through 08:40:57.660719 ...]
00:29:10.958 [2024-05-15 08:40:57.660940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.661123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.661152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 00:29:10.958 [2024-05-15 08:40:57.661343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.661544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.661573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 00:29:10.958 [2024-05-15 08:40:57.661685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.661869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.661883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 00:29:10.958 [2024-05-15 08:40:57.661970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.662189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.662210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 00:29:10.958 [2024-05-15 08:40:57.662376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.662582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.662612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 00:29:10.958 [2024-05-15 08:40:57.662824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.663044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.663074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 00:29:10.958 [2024-05-15 08:40:57.663300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.663502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.663532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 
00:29:10.958 [2024-05-15 08:40:57.663796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.664036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.664065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 00:29:10.958 [2024-05-15 08:40:57.664266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.664457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.664486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 00:29:10.958 [2024-05-15 08:40:57.664654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.664883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.664913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 00:29:10.958 [2024-05-15 08:40:57.665114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.665347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-05-15 08:40:57.665379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.958 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.665643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.665874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.665887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.666073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.666259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.666289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.666478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.666681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.666711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 
00:29:10.959 [2024-05-15 08:40:57.666972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.667240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.667270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.667441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.667637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.667666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.667918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.668070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.668084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.668265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.668445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.668474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.668669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.668876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.668906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.669087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.669322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.669353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.669472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.669634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.669648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 
00:29:10.959 [2024-05-15 08:40:57.669860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.669932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.669945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.670163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.670466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.670498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.670786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.670956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.670987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.671183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.671418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.671448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.671715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.671985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.672014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.672282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.672544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.672574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.672749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.672945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.672974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 
00:29:10.959 [2024-05-15 08:40:57.673174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.673434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.673463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.673588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.673824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.673854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.674026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.674280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.674313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.674446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.674635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.674665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.674858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.675122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.675152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.675334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.675540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.675570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.675795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.675980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.676009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 
00:29:10.959 [2024-05-15 08:40:57.676204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.676382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.676412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.676631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.676806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.676836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.677038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.677274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.677304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.959 qpair failed and we were unable to recover it. 00:29:10.959 [2024-05-15 08:40:57.677581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.677771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-05-15 08:40:57.677800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.678039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.678321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.678352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.678469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.678644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.678657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.678773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.678919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.678933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 
00:29:10.960 [2024-05-15 08:40:57.679089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.679177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.679191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.679340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.679484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.679497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.679706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.679869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.679882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.680033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.680186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.680201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.680392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.680548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.680562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.680773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.681013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.681028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.681179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.681342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.681356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 
00:29:10.960 [2024-05-15 08:40:57.681589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.681736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.681749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.682013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.682264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.682278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.682446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.682598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.682613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.682768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.682911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.682925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.683132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.683313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.683328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.683465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.683613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.683627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.683844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.684049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.684063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 
00:29:10.960 [2024-05-15 08:40:57.684258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.684349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.684363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.684469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.684643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.684657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.684849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.685045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.685059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.685223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.685428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.685441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.685646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.685837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.685851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.686010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.686246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.686260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.686491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.686675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.686688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 
00:29:10.960 [2024-05-15 08:40:57.686838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.687043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.687057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.687313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.687544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.687558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.687741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.687916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.687930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.960 qpair failed and we were unable to recover it. 00:29:10.960 [2024-05-15 08:40:57.688105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.688247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.960 [2024-05-15 08:40:57.688261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.688364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.688578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.688594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.688688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.688908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.688921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.689154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.689327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.689341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 
00:29:10.961 [2024-05-15 08:40:57.689558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.689777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.689790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.689893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.690029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.690044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.690251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.690404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.690418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.690589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.690818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.690832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.690984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.691217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.691235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.691346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.691581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.691595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.691755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.691956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.691970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 
00:29:10.961 [2024-05-15 08:40:57.692141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.692388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.692402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.692566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.692721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.692735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.692890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.693118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.693132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.693303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.693399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.693412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.693619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.693861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.693875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.694043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.694203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.694218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.694426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.694633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.694647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 
00:29:10.961 [2024-05-15 08:40:57.694900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.695053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.695068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.695142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.695373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.695388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.695479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.695636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.695650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.695820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.695909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.695922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.696152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.696243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.696257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.696407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.696557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.696572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.696707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 453338 Killed "${NVMF_APP[@]}" "$@" 00:29:10.961 [2024-05-15 08:40:57.696886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.696900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 
00:29:10.961 [2024-05-15 08:40:57.697051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.697332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.697346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.697457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:29:10.961 [2024-05-15 08:40:57.697566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.697580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.961 qpair failed and we were unable to recover it. 00:29:10.961 [2024-05-15 08:40:57.697734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:10.961 [2024-05-15 08:40:57.697977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.961 [2024-05-15 08:40:57.697991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.698151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:10.962 [2024-05-15 08:40:57.698268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.698283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:10.962 [2024-05-15 08:40:57.698508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.698643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.698658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.962 [2024-05-15 08:40:57.698940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.699089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.699103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 
00:29:10.962 [2024-05-15 08:40:57.699295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.699465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.699479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.699660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.699846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.699859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.700111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.700277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.700292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.700383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.700545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.700558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.700670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.700854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.700869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.701124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.701264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.701278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.701479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.701563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.701577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 
00:29:10.962 [2024-05-15 08:40:57.701739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.701968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.701981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.702156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.702331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.702345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.702487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.702637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.702653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.702827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.703004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.703019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.703156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.703272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.703286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.703523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.703595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.703609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 00:29:10.962 [2024-05-15 08:40:57.703758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.703984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.962 [2024-05-15 08:40:57.703998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.962 qpair failed and we were unable to recover it. 
00:29:10.962 [... connect() failed, errno = 111 / qpair failed entries for tqpair=0x7f86bc000b90 continue through 08:40:57.706039, interleaved with the shell trace below ...]
00:29:10.962 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=454079
00:29:10.962 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 454079
00:29:10.963 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:10.963 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 454079 ']'
00:29:10.963 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:10.963 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:29:10.963 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:10.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:10.963 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:29:10.963 08:40:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:10.963 [... connect() failed, errno = 111 / qpair failed entries for tqpair=0x7f86bc000b90 continue through 08:40:57.707744, interleaved with the trace above ...]
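The trace above shows the test bringing the target back up: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace and, going by SPDK's usual app options, -i 0 is the shared-memory instance id, -e 0xFFFF the tracepoint group mask, and -m 0xF0 the core mask (0xF0 = 0b11110000, i.e. cores 4-7). waitforlisten 454079 then blocks until pid 454079 is alive and its RPC socket /var/tmp/spdk.sock accepts connections. A rough standalone sketch of that readiness poll, loosely modeled on waitforlisten (the 100-attempt cap mirrors max_retries=100 from the trace; the 100 ms delay and the omitted pid check are simplifications):

    /* waitforlisten_sketch.c - illustrative only, not the autotest helper */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_un sa = { 0 };
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);

        for (int attempt = 0; attempt < 100; attempt++) {   /* like max_retries=100 */
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                /* The target's RPC server is listening; the test can proceed. */
                printf("RPC socket up after %d attempt(s)\n", attempt + 1);
                close(fd);
                return 0;
            }
            close(fd);
            usleep(100 * 1000);   /* 100 ms between probes (illustrative) */
        }
        fprintf(stderr, "target never started listening on /var/tmp/spdk.sock\n");
        return 1;
    }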
00:29:10.963 [... the failure pattern continues on tqpair=0x7f86bc000b90 through 08:40:57.708150 ...]
00:29:10.963 [2024-05-15 08:40:57.708264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.963 [2024-05-15 08:40:57.708487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.963 [2024-05-15 08:40:57.708501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.963 qpair failed and we were unable to recover it.
00:29:10.963 [... the same pattern repeats for tqpair=0x7f86b4000b90 through 08:40:57.738602 ...]
00:29:10.968 [... from 08:40:57.738944 the same errors are reported for tqpair=0x1a50c10 (through 08:40:57.741236); from 08:40:57.741490 they resume on tqpair=0x7f86b4000b90 through 08:40:57.742029, always with addr=10.0.0.2, port=4420 ...]
00:29:10.969 [2024-05-15 08:40:57.742118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.742190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.742200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.742279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.742425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.742434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.742601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.742727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.742737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.742867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.742946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.742955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.743026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.743090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.743100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.743171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.743256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.743266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.743340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.743414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.743424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 
00:29:10.969 [2024-05-15 08:40:57.743554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.743699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.743709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.743770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.743897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.743907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.743973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.744106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.744305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.744460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.744611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 
00:29:10.969 [2024-05-15 08:40:57.744751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.744833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.744897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.745030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.745040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.745104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.745192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.745203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.745375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.745500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.745510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.745587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.745671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.745680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.745751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.745895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.745904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.745964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.746105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.746115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 
00:29:10.969 [2024-05-15 08:40:57.746206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.746273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.746283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.746421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.746495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.746505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.746564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.746652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.746662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.746726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.746795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.746806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.746865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.747004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.747014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.747089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.747218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.747228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.969 qpair failed and we were unable to recover it. 00:29:10.969 [2024-05-15 08:40:57.747293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.969 [2024-05-15 08:40:57.747366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.747375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 
00:29:10.970 [2024-05-15 08:40:57.747434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.747495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.747505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.747590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.747649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.747658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.747717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.747877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.747886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.747974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.748099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.748109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.748237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.748312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.748322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.748462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.748532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.748541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.748687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.748757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.748767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 
00:29:10.970 [2024-05-15 08:40:57.748836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.748907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.748916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.748985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.749055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.749067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.749124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.749278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.749288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.749350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.749415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.749425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.749624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.749679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.749689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.749816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.749873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.749882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.749961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 
00:29:10.970 [2024-05-15 08:40:57.750116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.750265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.750432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.750563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.750751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.750919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.750984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.751115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 
00:29:10.970 [2024-05-15 08:40:57.751330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.751561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.751775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.751907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.751974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.752042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.752120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.752130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.752206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.752269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.752279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.752344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.752428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.752437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 
00:29:10.970 [2024-05-15 08:40:57.752570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.752627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.752638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.752700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.752855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.970 [2024-05-15 08:40:57.752864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.970 qpair failed and we were unable to recover it. 00:29:10.970 [2024-05-15 08:40:57.752929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-05-15 08:40:57.753082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-05-15 08:40:57.753284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-05-15 08:40:57.753433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-05-15 08:40:57.753586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 
00:29:10.971 [2024-05-15 08:40:57.753719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.753807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-05-15 08:40:57.753876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-05-15 08:40:57.754092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-05-15 08:40:57.754330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-05-15 08:40:57.754471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-05-15 08:40:57.754617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-05-15 08:40:57.754759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-05-15 08:40:57.754902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 
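Every record in this stretch is the same failure: on Linux, errno 111 is ECONNREFUSED, meaning the TCP SYN to 10.0.0.2:4420 (the standard NVMe/TCP port) was answered with a reset because nothing was listening there at that moment. As a minimal sketch, assuming a reachable host with the port closed, the following standalone C program reproduces the exact errno the log reports; it is illustrative only, not SPDK's posix_sock_create().

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Target taken from the log above; assumes the host answers but no
     * listener is bound to port 4420. */
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    int fd;

    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        /* With the port closed this prints: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

The initiator keeps retrying, so the cycle repeats until the target side comes back up; the banner just below marks the target process restarting.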
00:29:10.971 [2024-05-15 08:40:57.754961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.971 [2024-05-15 08:40:57.755050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.971 [2024-05-15 08:40:57.755060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.971 qpair failed and we were unable to recover it.
[... the same cycle repeats through 08:40:57.756016 ...]
00:29:10.971 [2024-05-15 08:40:57.756096] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization...
00:29:10.971 [2024-05-15 08:40:57.756135] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the refused-connect/qpair-failure cycle for tqpair=0x7f86b4000b90 resumes immediately (08:40:57.756182-08:40:57.757157) ...]
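The EAL parameters line above shows the nvmf target coming back up with coremask -c 0xF0, i.e. 0b11110000, which pins the target to lcores 4-7. A minimal sketch of that decoding, assuming only the mask value copied from the log:

#include <stdio.h>

int main(void)
{
    /* Mask copied from the "-c 0xF0" EAL argument in the log above. */
    unsigned long mask = 0xF0;

    for (int lcore = 0; lcore < 64; lcore++)
        if (mask & (1UL << lcore))
            printf("lcore %d enabled\n", lcore);   /* prints lcores 4, 5, 6, 7 */

    return 0;
}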
00:29:10.971 [2024-05-15 08:40:57.757227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.971 [2024-05-15 08:40:57.757286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.971 [2024-05-15 08:40:57.757295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.971 qpair failed and we were unable to recover it.
[... the same refused-connect/qpair-failure cycle for tqpair=0x7f86b4000b90 against 10.0.0.2:4420 repeats continuously through 08:40:57.767293 ...]
00:29:10.973 [2024-05-15 08:40:57.767436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.767507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.767521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.767606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.767681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.767695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.767775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.767860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.767874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.768015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.768158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.768176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.768365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.768437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.768450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.768516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.768598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.768611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.768684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.768770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.768783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 
00:29:10.973 [2024-05-15 08:40:57.768869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.768928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.768941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.769026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.769087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.769099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.769172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.769263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.769276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.769346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.769424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.769438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.769582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.769647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.769660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.769731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.769795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.769809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.769941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.770080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.770093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 
00:29:10.973 [2024-05-15 08:40:57.770229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.770431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.770444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.770528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.770600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.770613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.770688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.770836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.770849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.770919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.770999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.771012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.771099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.771185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.771199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-05-15 08:40:57.771271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-05-15 08:40:57.771358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.771370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.771524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.771606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.771622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-05-15 08:40:57.771709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.771861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.771874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.771945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.772015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.772029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.772095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.772190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.772204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.772371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.772538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.772551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.772633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.772700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.772713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.772792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.772864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.772877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.772961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-05-15 08:40:57.773137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.773308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.773473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.773654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.773812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.773915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.773992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.774160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-05-15 08:40:57.774317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.774483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.774640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.774800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.774897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.774983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.775200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.775352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-05-15 08:40:57.775493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.775720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.775864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.775964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.776101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.776195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.776209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.776278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.776350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.776363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.776437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.776569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.776582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.776647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.776712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.776725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-05-15 08:40:57.776793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.776857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.776870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.776939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.777030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.777043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.777123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.777191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.777201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-05-15 08:40:57.777269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-05-15 08:40:57.777396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.777405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.777484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.777618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.777627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.777685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.777772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.777783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.777843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.777925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.777935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-05-15 08:40:57.778009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.778147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.778293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.778507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.778649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.778779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.778912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.778982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-05-15 08:40:57.779042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.779302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.779446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.779580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.779729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.779934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.779998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.780069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-05-15 08:40:57.780210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.780360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.780585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.780730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.780865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.780947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.781109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.781316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-05-15 08:40:57.781440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.781657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.781792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.781873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.781937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.782001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.782011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-05-15 08:40:57.782071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-05-15 08:40:57.782128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.782205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.782331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-05-15 08:40:57.782486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.782676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.782864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.782934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.782995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.783120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.783251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.783428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-05-15 08:40:57.783578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.783707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.783922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.783994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.784060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.784192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.784336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.784581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-05-15 08:40:57.784716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.784916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.784997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.785136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.785268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.785397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.785533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.785669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-05-15 08:40:57.785823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.785888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.785947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.786009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.786019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.786077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.786148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.786157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.786231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.786294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-05-15 08:40:57.786304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-05-15 08:40:57.786365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.786517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.786527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.786655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.786796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.786806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.786874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.786941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.786951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 
00:29:10.977 [2024-05-15 08:40:57.787019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.787213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.787340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.787537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.787759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.787878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.787950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.788022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 
00:29:10.977 [2024-05-15 08:40:57.788151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.788312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.788503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.788643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.788779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.788920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.788995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.789084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 
00:29:10.977 [2024-05-15 08:40:57.789286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.789422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.789645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.789794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.789879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.789940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.790077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.790283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 
00:29:10.977 [2024-05-15 08:40:57.790427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.790567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.790692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.790886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.790950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.791009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.791071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-05-15 08:40:57.791080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-05-15 08:40:57.791151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.791224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.791234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.791373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.791426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.791436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 
00:29:10.978 [2024-05-15 08:40:57.791509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.791568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.791577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.791650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.791792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.791804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.791875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.791947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.791956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.792020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.792263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.792399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.792592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 
00:29:10.978 [2024-05-15 08:40:57.792782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.792925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.792995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.793121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.793265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.793390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.793617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.793755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 
00:29:10.978 [2024-05-15 08:40:57.793887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.793954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.794033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.794194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.794460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.794594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.794742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.794876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.794952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 
00:29:10.978 [2024-05-15 08:40:57.795020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.795229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.795373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.795514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.795636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.795792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.795918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.795992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 
00:29:10.978 [2024-05-15 08:40:57.796055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.796116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.796125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.796189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.796251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.796262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.796318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.796395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.796405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-05-15 08:40:57.796490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-05-15 08:40:57.796546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.796556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.796618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.796686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.796696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.796756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.796811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.796820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.796902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.797029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.797038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 
00:29:10.979 [2024-05-15 08:40:57.797115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.797238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.797248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.797315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.797454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.797464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.797540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.797663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.797672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.797734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.797860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.797870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.797955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.798191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.798379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 
00:29:10.979 [2024-05-15 08:40:57.798534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.798677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.798817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.798948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.799025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.799226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.799385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.799531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 
00:29:10.979 [2024-05-15 08:40:57.799681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.799819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.799886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.799950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.800103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.800307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.800445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.800588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 
00:29:10.979 [2024-05-15 08:40:57.800754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.800889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.800958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.801017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.801153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.801364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.801489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-05-15 08:40:57.801687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 
00:29:10.979 [2024-05-15 08:40:57.801818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-05-15 08:40:57.801882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.801945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.802084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.802279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.802478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.802611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.802767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.802836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-05-15 08:40:57.803028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.803176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.803336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.803474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.803619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.803757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.803829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.803908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-05-15 08:40:57.804169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.804367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.804586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.804724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.804859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.804993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.805082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.805216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-05-15 08:40:57.805341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.805467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.805621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.805756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.805908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.805981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.806055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.806117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.806126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-05-15 08:40:57.806192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.806251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-05-15 08:40:57.806260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.984 [2024-05-15 08:40:57.827066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:10.984 [... the same connect()/qpair failure sequence continues around this notice, from 08:40:57.826396 through 08:40:57.829433, still errno = 111 against 10.0.0.2, port=4420 ...]
00:29:10.984 [2024-05-15 08:40:57.827452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.827523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.827532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.984 qpair failed and we were unable to recover it. 00:29:10.984 [2024-05-15 08:40:57.827597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.827656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.827665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.984 qpair failed and we were unable to recover it. 00:29:10.984 [2024-05-15 08:40:57.827792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.827863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.827873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.984 qpair failed and we were unable to recover it. 00:29:10.984 [2024-05-15 08:40:57.827942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.827997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.984 qpair failed and we were unable to recover it. 00:29:10.984 [2024-05-15 08:40:57.828065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.984 qpair failed and we were unable to recover it. 00:29:10.984 [2024-05-15 08:40:57.828193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.984 qpair failed and we were unable to recover it. 00:29:10.984 [2024-05-15 08:40:57.828320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.984 qpair failed and we were unable to recover it. 
00:29:10.984 [2024-05-15 08:40:57.828447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.984 qpair failed and we were unable to recover it. 00:29:10.984 [2024-05-15 08:40:57.828582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.984 [2024-05-15 08:40:57.828658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.984 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.828734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.828792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.828802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.828941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.829079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.829223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.829368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 
00:29:10.985 [2024-05-15 08:40:57.829504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.829642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.829774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.829843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.829900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.830121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.830264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.830399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 
00:29:10.985 [2024-05-15 08:40:57.830602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.830818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.830890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.830976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.831193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.831320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.831511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.831757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 
00:29:10.985 [2024-05-15 08:40:57.831888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.831971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.832031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.832153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.832168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.832235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.832369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.832380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.832441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.832568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.832582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.832643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.832712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.832722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.832796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.832853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.832863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.832924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 
00:29:10.985 [2024-05-15 08:40:57.833080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.833214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.833369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.833501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.833633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.833760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.833843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.833918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 
00:29:10.985 [2024-05-15 08:40:57.834149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.834308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.834460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.834674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.834821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.834886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.834965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.835023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.835033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.985 qpair failed and we were unable to recover it. 00:29:10.985 [2024-05-15 08:40:57.835099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.985 [2024-05-15 08:40:57.835173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.835183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 
00:29:10.986 [2024-05-15 08:40:57.835266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.835329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.835339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.835420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.835553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.835564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.835656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.835787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.835797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.835872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.836083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.836216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.836362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 
00:29:10.986 [2024-05-15 08:40:57.836503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.836796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.836870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.836954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.837221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.837433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.837588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.837740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 
00:29:10.986 [2024-05-15 08:40:57.837893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.837971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.838104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.838246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.838392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.838531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.838666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.838801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 
00:29:10.986 [2024-05-15 08:40:57.838923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.838992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.839064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.839256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.839474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.839606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.839820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.839887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.840038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.840093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.840102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 
00:29:10.986 [2024-05-15 08:40:57.840169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.840245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.840255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.840494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.840567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.840577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.840700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.840840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.840850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.840906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.840972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.840982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.841038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.841190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.841325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 
00:29:10.986 [2024-05-15 08:40:57.841468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.841624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.841747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.841818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-05-15 08:40:57.841879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-05-15 08:40:57.842003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.842074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.842288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.842447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-05-15 08:40:57.842585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.842720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.842870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.842942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.842998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.843140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.843275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.843478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-05-15 08:40:57.843604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.843740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.843812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.843868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.844073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.844283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.844418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.844627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-05-15 08:40:57.844775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.844905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.844969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.845026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.845232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.845362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.845494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-05-15 08:40:57.845617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-05-15 08:40:57.845814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-05-15 08:40:57.845880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it.
[... the same four-part failure sequence (two "connect() failed, errno = 111" messages, the nvme_tcp_qpair_connect_sock error, and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 08:40:57.845945 through 08:40:57.864439, all on tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 ...]
00:29:10.990 [2024-05-15 08:40:57.864545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-05-15 08:40:57.864701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-05-15 08:40:57.864717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it.
00:29:10.990 [2024-05-15 08:40:57.864807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-05-15 08:40:57.864886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-05-15 08:40:57.864901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it.
00:29:10.990 [2024-05-15 08:40:57.864977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-05-15 08:40:57.865055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-05-15 08:40:57.865069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it.
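[editor's note] errno = 111 on Linux is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 (the conventional NVMe/TCP port) is answered with RST because no target is listening there at this point in the test. A minimal standalone C sketch, not SPDK code, that reproduces the same errno by connecting to a port with no listener; 127.0.0.1 is substituted so the refusal is immediate and deterministic, while the log's actual target was 10.0.0.2:4420:

    /* econnrefused_demo.c - standalone illustration, not SPDK code.
     * Connecting to a local TCP port with no listener yields errno 111
     * (ECONNREFUSED), the same errno posix_sock_create reports above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With nothing listening on 4420, this prints errno 111. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }

        close(fd);
        return 0;
    }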
[... the same failure sequence repeats for every reconnect attempt from 08:40:57.865300 through 08:40:57.869420, all on tqpair=0x1a50c10 with addr=10.0.0.2, port=4420; every attempt failed with errno = 111 ...]
00:29:10.991 [2024-05-15 08:40:57.869551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.869693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.869703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for every reconnect attempt from 08:40:57.869773 through 08:40:57.873010, all on tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420; every attempt failed with errno = 111 ...]
00:29:10.991 [2024-05-15 08:40:57.873069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it.
00:29:10.991 [2024-05-15 08:40:57.873237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-05-15 08:40:57.873411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-05-15 08:40:57.873566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-05-15 08:40:57.873698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-05-15 08:40:57.873834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-05-15 08:40:57.873908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.873918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.873995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.874150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 
00:29:10.992 [2024-05-15 08:40:57.874288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.874493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.874651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.874779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.874951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.875016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.875155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.875168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.875296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.875363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.875373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.875440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.875509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.875519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 
00:29:10.992 [2024-05-15 08:40:57.875648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.875713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.875723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.875798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.875877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.875886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.875958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.876095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.876105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.876262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.876322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.876333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.876460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.876548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.876558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.876619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.876748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.876758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.876834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.876897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.876906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 
00:29:10.992 [2024-05-15 08:40:57.876970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.877122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.877265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.877402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.877617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.877775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.877919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.877990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 
00:29:10.992 [2024-05-15 08:40:57.878072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.878208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.878218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.878272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.878351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.878362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.878437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.878499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.878509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.878652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.878718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.878727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.878855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.878993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.879077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.879226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 
00:29:10.992 [2024-05-15 08:40:57.879381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.879597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.879727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.879936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.879999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.880142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.880274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.880417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 
00:29:10.992 [2024-05-15 08:40:57.880541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.880693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.880782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.880915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.881041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.881052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.881184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.881254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.881264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-05-15 08:40:57.881325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.881447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-05-15 08:40:57.881457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.881573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.881710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.881720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.881910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.881995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 
00:29:10.993 [2024-05-15 08:40:57.882103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.882254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.882407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.882540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.882736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.882877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.882939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.882999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 
00:29:10.993 [2024-05-15 08:40:57.883128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.883351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.883652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.883794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.883930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.883997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.884082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.884220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 
00:29:10.993 [2024-05-15 08:40:57.884345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.884537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.884735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.884814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.884941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.885079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.885215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.885347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 
00:29:10.993 [2024-05-15 08:40:57.885485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.885764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.885842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.885908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.886033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.886043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.886121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.886182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.886192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.886315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.886441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.886451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.886520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.886600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.886612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.886810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.886877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.886887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 
00:29:10.993 [2024-05-15 08:40:57.886944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.887065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.887075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.887148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.887305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.887317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.887445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.887568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.887578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.887652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.887706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.887716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.887922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.888045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.888055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.888117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.888188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.888198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-05-15 08:40:57.888273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.888340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.888349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 
00:29:10.993 [2024-05-15 08:40:57.888413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.888482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-05-15 08:40:57.888492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.888555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.888614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.888623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.888750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.888820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.888830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.888903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.889036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.889046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.889188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.889313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.889323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.889380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.889432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.889442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.889578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.889704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.889714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 
00:29:10.994 [2024-05-15 08:40:57.889843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.889911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.889921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.889977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.890179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.890189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.890261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.890318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.890328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.890476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.890618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.890628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.890701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.890762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.890772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.890858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.890992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.891001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-05-15 08:40:57.891072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.891197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-05-15 08:40:57.891208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 
00:29:10.994 [2024-05-15 08:40:57.891272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.994 [2024-05-15 08:40:57.891344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.994 [2024-05-15 08:40:57.891354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.994 qpair failed and we were unable to recover it.
00:29:10.994 [the four-line pattern above repeats, timestamps aside, for every reconnect attempt from 08:40:57.891272 through 08:40:57.900757: each connect() to 10.0.0.2:4420 fails with errno = 111 and the qpair cannot be recovered]
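For anyone triaging this block: on Linux, errno = 111 is ECONNREFUSED, meaning the TCP connection to 10.0.0.2:4420 (the NVMe/TCP port used throughout this test) was actively refused because nothing was accepting on it at that moment. A minimal, self-contained sketch of the same failure mode follows; it is illustrative C, not SPDK's posix_sock_create, and only the address and port are taken from the log.

/* Reproduce the logged failure mode: connect() to a host/port with no
 * listener returns -1 with errno = ECONNREFUSED (111 on Linux). If the
 * host were unreachable instead, errno would differ (e.g. ETIMEDOUT). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener this prints:
         * connect: errno=111 (Connection refused) */
        printf("connect: errno=%d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}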
00:29:10.995 [2024-05-15 08:40:57.900953] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:10.995 [2024-05-15 08:40:57.900979] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:10.995 [2024-05-15 08:40:57.900990] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:10.995 [2024-05-15 08:40:57.900998] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:10.995 [2024-05-15 08:40:57.901003] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:10.995 [2024-05-15 08:40:57.901112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:29:10.995 [2024-05-15 08:40:57.901221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:29:10.995 [2024-05-15 08:40:57.901327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:29:10.995 [2024-05-15 08:40:57.901329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:29:10.995 [connect()/qpair failure entries matching the pattern above ran interleaved with these notices, from 08:40:57.900843 through 08:40:57.901635; each attempt failed with errno = 111 and the qpair could not be recovered]
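The app.c notices above describe the two ways to pull the trace data: run 'spdk_trace -s nvmf -i 0' while the application is still up, or copy /dev/shm/nvmf_trace.0 for offline analysis. A minimal sketch of the offline-copy step, with the source path taken from the notice and the destination name purely illustrative:

/* Copy the shared-memory trace file named in the app.c notice to a
 * durable location for offline analysis. The source path comes from the
 * log; the destination filename is an assumption for illustration. */
#include <stdio.h>

int main(void)
{
    FILE *src = fopen("/dev/shm/nvmf_trace.0", "rb");
    if (!src) {
        perror("open /dev/shm/nvmf_trace.0");
        return 1;
    }
    FILE *dst = fopen("nvmf_trace.0", "wb");
    if (!dst) {
        perror("open nvmf_trace.0");
        fclose(src);
        return 1;
    }

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), src)) > 0) {
        if (fwrite(buf, 1, n, dst) != n) {
            perror("write");
            break;
        }
    }

    fclose(dst);
    fclose(src);
    return 0;
}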
00:29:10.995 [2024-05-15 08:40:57.901777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.995 [2024-05-15 08:40:57.901901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.995 [2024-05-15 08:40:57.901911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420
00:29:10.995 qpair failed and we were unable to recover it.
00:29:10.995 [the same four-line failure pattern continues uninterrupted from 08:40:57.901777 through 08:40:57.918601; every connect() to 10.0.0.2:4420 ends with errno = 111 and an unrecoverable qpair]
00:29:10.998 [2024-05-15 08:40:57.918750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.918884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.918894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.918963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.919047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.919057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.919261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.919335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.919344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.919429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.919511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.919522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.919669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.919809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.919820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.919942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.920015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.920024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.920216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.920362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.920372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 
00:29:10.998 [2024-05-15 08:40:57.920458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.920526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.920536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.920676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.920883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.920894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.920980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.921121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.921132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.921368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.921441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.921451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.921532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.921601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.921610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.921760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.921896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.921906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.922105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.922190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.922200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 
00:29:10.998 [2024-05-15 08:40:57.922288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.922421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.922431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.922578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.922720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.922730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.922929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.923106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.923117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.923179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.923250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.923260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.923482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.923618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.923627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.923704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.923759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.923769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.998 qpair failed and we were unable to recover it. 00:29:10.998 [2024-05-15 08:40:57.923987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.924062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-05-15 08:40:57.924072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 
00:29:10.999 [2024-05-15 08:40:57.924233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.924306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.924316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.924489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.924571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.924581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.924673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.924754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.924764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.924910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.925133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.925143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.925221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.925371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.925381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.925465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.925542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.925551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.925699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.925919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.925930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 
00:29:10.999 [2024-05-15 08:40:57.926003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.926152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.926162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.926390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.926551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.926560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.926638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.926815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.926825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.926951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.927090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.927100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.927321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.927410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.927419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.927544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.927634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.927644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 00:29:10.999 [2024-05-15 08:40:57.927738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.927916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.927931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it. 
00:29:10.999 [2024-05-15 08:40:57.928200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.928315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-05-15 08:40:57.928330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:10.999 qpair failed and we were unable to recover it.
[... the identical failure sequence then repeats for tqpair=0x7f86bc000b90 from 08:40:57.928 through 08:40:57.956, every attempt ending in "qpair failed and we were unable to recover it." ...]
00:29:11.271 [2024-05-15 08:40:57.956996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.957084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.957097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.957300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.957379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.957393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.957527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.957604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.957616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.957753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.957948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.957962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.958104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.958267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.958280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.958503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.958655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.958668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.958760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.959020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.959034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 
00:29:11.271 [2024-05-15 08:40:57.959186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.959327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.959342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.959499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.959696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.959709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.959986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.960168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.960182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.960341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.960439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.960452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.960594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.960741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.960754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.960850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.961061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.961076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.961219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.961318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.961331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 
00:29:11.271 [2024-05-15 08:40:57.961501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.961588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.961601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.961773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.961941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.961954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.962101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.962300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.962314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.271 qpair failed and we were unable to recover it. 00:29:11.271 [2024-05-15 08:40:57.962465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.271 [2024-05-15 08:40:57.962564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.962577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.962848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.963038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.963052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.963139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.963310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.963325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.963550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.963715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.963729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 
00:29:11.272 [2024-05-15 08:40:57.963899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.964063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.964077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.964206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.964298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.964312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.964453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.964613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.964627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.964791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.964875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.964889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.965036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.965121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.965134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.965367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.965512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.965525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.965630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.965712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.965725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 
00:29:11.272 [2024-05-15 08:40:57.965798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.965964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.965977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.966111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.966256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.966269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.966427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.966511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.966524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.966667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.966856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.966869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.967059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.967282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.967296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.967390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.967532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.967545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.967694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.967783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.967796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 
00:29:11.272 [2024-05-15 08:40:57.968010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.968161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.968179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.968273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.968355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.968368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.968472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.968615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.968629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.968730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.968896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.968910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.969113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.969271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.969286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.969441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.969572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.969585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.969686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.969763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.969775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 
00:29:11.272 [2024-05-15 08:40:57.969985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.970235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.970249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.970349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.970500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.970513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.970593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.970672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.970685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-05-15 08:40:57.970974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-05-15 08:40:57.971144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.971157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-05-15 08:40:57.971308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.971506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.971519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-05-15 08:40:57.971612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.971691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.971704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-05-15 08:40:57.971889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.972085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.972099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 
00:29:11.273 [2024-05-15 08:40:57.972197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.972290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.972305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-05-15 08:40:57.972388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.972482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.972496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-05-15 08:40:57.972591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.972683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.972695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-05-15 08:40:57.972786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.973096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.973110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-05-15 08:40:57.973252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.973385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.973398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-05-15 08:40:57.973560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.973754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.973767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-05-15 08:40:57.973926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.974065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-05-15 08:40:57.974079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 
00:29:11.273 [2024-05-15 08:40:57.974309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.273 [2024-05-15 08:40:57.974460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.273 [2024-05-15 08:40:57.974474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86bc000b90 with addr=10.0.0.2, port=4420
00:29:11.273 qpair failed and we were unable to recover it.
[... two further identical retries for tqpair=0x7f86bc000b90 (08:40:57.974607 through 08:40:57.974993) elided ...]
00:29:11.273 [2024-05-15 08:40:57.975195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.273 [2024-05-15 08:40:57.975320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.273 [2024-05-15 08:40:57.975335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:11.273 qpair failed and we were unable to recover it.
[... the same four-line sequence then repeats back-to-back for the new qpair handle tqpair=0x1a50c10, dozens of times (08:40:57.975494 through 08:40:57.994944), with no change in the error, ending: ...]
00:29:11.276 [2024-05-15 08:40:57.995089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.276 [2024-05-15 08:40:57.995235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.276 [2024-05-15 08:40:57.995249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:11.276 qpair failed and we were unable to recover it.
00:29:11.276 [2024-05-15 08:40:57.995358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-05-15 08:40:57.995440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-05-15 08:40:57.995453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-05-15 08:40:57.995538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-05-15 08:40:57.995627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-05-15 08:40:57.995640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-05-15 08:40:57.995873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-05-15 08:40:57.996024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-05-15 08:40:57.996038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-05-15 08:40:57.996105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-05-15 08:40:57.996312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-05-15 08:40:57.996326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:57.996461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.996544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.996557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:57.996787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.996876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.996890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:57.997034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.997233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.997247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 
00:29:11.277 [2024-05-15 08:40:57.997321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.997481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.997494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:57.997694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.997852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.997866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:57.997949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.998184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.998198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:57.998363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.998521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.998534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:57.998736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.998908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.998921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:57.999143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.999384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.999399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:57.999483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.999580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.999593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 
00:29:11.277 [2024-05-15 08:40:57.999742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:57.999996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.000013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.000215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.000393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.000406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.000602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.000679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.000692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.000796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.000926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.000939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.001173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.001320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.001333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.001437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.001588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.001601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.001781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.001924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.001938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 
00:29:11.277 [2024-05-15 08:40:58.002114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.002190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.002203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.002291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.002392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.002405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.002550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.002721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.002734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.002950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.003159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.003176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.003355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.003570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.003583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.003682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.003836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.003849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.004093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.004244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.004259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 
00:29:11.277 [2024-05-15 08:40:58.004399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.004489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.004502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.004639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.004817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.004830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.004992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.005075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.005088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-05-15 08:40:58.005174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.005331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-05-15 08:40:58.005344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.005588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.005685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.005698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.005863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.006060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.006074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.006219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.006442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.006455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 
00:29:11.278 [2024-05-15 08:40:58.006623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.006765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.006778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.007014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.007245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.007259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.007423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.007572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.007585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.007730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.007887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.007900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.008100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.008205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.008219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.008417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.008571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.008584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.008754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.008943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.008956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 
00:29:11.278 [2024-05-15 08:40:58.009157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.009333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.009346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.009517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.009606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.009619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.009860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.010057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.010071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.010219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.010361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.010374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.010594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.010744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.010757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.010856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.010953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.010965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.011054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.011234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.011248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 
00:29:11.278 [2024-05-15 08:40:58.011448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.011646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.011659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.011848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.012071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.012084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.012289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.012441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.012455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.012532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.012675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.012688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.012783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.012977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.012991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.013229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.013314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.013327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.013483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.013632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.013646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 
00:29:11.278 [2024-05-15 08:40:58.013790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.014011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.014024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.014179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.014406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.014419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.014497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.014578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.014591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-05-15 08:40:58.014843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.015014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-05-15 08:40:58.015027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.015119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.015354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.015368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.015469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.015617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.015631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.015700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.015779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.015792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-05-15 08:40:58.016025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.016159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.016178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.016355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.016449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.016461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.016528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.016677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.016693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.016793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.017043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.017056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.017246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.017347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.017360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.017448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.017595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.017608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.017706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.017860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.017872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-05-15 08:40:58.018019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.018163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.018182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.018368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.018518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.018531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.018689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.018853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.018866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.019107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.019244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.019258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.019408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.019496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.019509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.019729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.019927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.019940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.020141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.020224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.020238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-05-15 08:40:58.020335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.020486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.020499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.020658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.020807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.020820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.020987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.021076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.021088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.021163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.021277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.021290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.021380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.021629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.021643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.021810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.021977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.021989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.022176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.022325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.022339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-05-15 08:40:58.022479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.022566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.022581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.022722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.022861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.022874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.023007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.023156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.023174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.023283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.023432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.023445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-05-15 08:40:58.023533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-05-15 08:40:58.023617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.023630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-05-15 08:40:58.023842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.024053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.024066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-05-15 08:40:58.024256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.024357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.024370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-05-15 08:40:58.024583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.024719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.024732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-05-15 08:40:58.024906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.025065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.025078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-05-15 08:40:58.025311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.025461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.025474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-05-15 08:40:58.025572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.025723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.025737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-05-15 08:40:58.025988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.026201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.026214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-05-15 08:40:58.026445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.026589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.026606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-05-15 08:40:58.026767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.026906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-05-15 08:40:58.026919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-05-15 08:40:58.027059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.280 [2024-05-15 08:40:58.027240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.280 [2024-05-15 08:40:58.027254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:11.280 qpair failed and we were unable to recover it.
[... the same failure unit (one or two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" lines, then "nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for dozens of further connect attempts between 08:40:58.027 and 08:40:58.075, differing only in timestamps ...]
00:29:11.285 [2024-05-15 08:40:58.075819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-05-15 08:40:58.075898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-05-15 08:40:58.075910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-05-15 08:40:58.075994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.076186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.076200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.076415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.076518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.076531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.076686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.076924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.076937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.077082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.077251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.077265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.077364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.077522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.077535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.077708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.077894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.077907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.078143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.078333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.078347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50c10 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 
00:29:11.286 [2024-05-15 08:40:58.078518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.078780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.078792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.078881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.078971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.078982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.079189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.079363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.079373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.079591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.079720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.079729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.079894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.080039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.080048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.080265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.080343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.080353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.080506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.080639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.080648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 
00:29:11.286 [2024-05-15 08:40:58.080740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.080960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.080969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.081115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.081257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.081267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.081444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.081633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.081643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.081769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.081896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.081906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.082087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.082185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.082195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.082346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.082556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.082566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.082645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.082820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.082829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 
00:29:11.286 [2024-05-15 08:40:58.082967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.083157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.083170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.083320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.083378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.083388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.083610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.083749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.083759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-05-15 08:40:58.084017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-05-15 08:40:58.084234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.084244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.084406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.084481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.084490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.084570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.084787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.084797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.084879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.085091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.085101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 
00:29:11.287 [2024-05-15 08:40:58.085287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.085494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.085504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.085716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.085929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.085939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.086149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.086221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.086231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.086425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.086659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.086669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.086794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.086934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.086944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.087090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.087299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.087309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.087385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.087528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.087537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 
00:29:11.287 [2024-05-15 08:40:58.087613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.087695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.087704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.087762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.087978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.087988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.088150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.088305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.088315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.088538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.088695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.088704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.088909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.089109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.089118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.089203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.089355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.089364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.089556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.089634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.089643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 
00:29:11.287 [2024-05-15 08:40:58.089778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.089926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.089935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.090176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.090336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.090346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.090434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.090653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.090663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.090761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.090882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.090892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.091038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.091172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.091182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.091347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.091542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.091551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.091614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.091736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.091746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 
00:29:11.287 [2024-05-15 08:40:58.091835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.091963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.091972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.092134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.092323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.092333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.092548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.092680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.092689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-05-15 08:40:58.092851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-05-15 08:40:58.092970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.092980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.093177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.093340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.093350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.093508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.093715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.093725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.093886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.094008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.094018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 
00:29:11.288 [2024-05-15 08:40:58.094124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.094272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.094282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.094422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.094618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.094630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.094765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.094889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.094899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.095035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.095178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.095188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.095408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.095597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.095606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.095815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.096031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.096041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.096231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.096366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.096376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 
00:29:11.288 [2024-05-15 08:40:58.096464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.096658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.096668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.096903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.097040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.097050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.097291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.097491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.097500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.097659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.097836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.097846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.097988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.098069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.098080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.098299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.098367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.098376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.098518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.098604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.098614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 
00:29:11.288 [2024-05-15 08:40:58.098807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.098935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.098944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.099092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.099218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.099227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.099393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.099602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.099612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.099779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.099992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.100002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.100222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.100278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.100288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.100480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.100603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.100612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.100754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.100813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.100823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 
00:29:11.288 [2024-05-15 08:40:58.101006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.101212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.101224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.101366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.101601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.101610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.101755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.101834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.101843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-05-15 08:40:58.101908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-05-15 08:40:58.102121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.102131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.102217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.102406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.102415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.102585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.102666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.102675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.102902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.103052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.103061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 
00:29:11.289 [2024-05-15 08:40:58.103246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.103336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.103346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.103490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.103686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.103695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.103866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.104058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.104068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.104220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.104356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.104367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.104503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.104577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.104587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.104803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.105041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.105050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.105263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.105460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.105469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 
00:29:11.289 [2024-05-15 08:40:58.105602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.105791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.105800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.106001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.106139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.106148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.106285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.106444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.106454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.106642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.106765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.106774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.106983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.107184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.107194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.107412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.107608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.107618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.107755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.107890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.107899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 
00:29:11.289 [2024-05-15 08:40:58.108115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.108258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.108268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.108339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.108477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.108486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.108622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.108753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.108762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.108898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.109124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.109133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.109321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.109389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.109398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.109593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.109811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.109820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.110030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.110171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.110180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 
00:29:11.289 [2024-05-15 08:40:58.110266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.110493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.110503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.110721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.110877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.110887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.111118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.111364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.111373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-05-15 08:40:58.111536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-05-15 08:40:58.111669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.111679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.111898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.112109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.112119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.112253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.112348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.112357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.112513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.112701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.112711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 
00:29:11.290 [2024-05-15 08:40:58.112879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.113090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.113100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.113259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.113417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.113426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.113566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.113702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.113712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.113786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.113974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.113983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.114210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.114410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.114420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.114620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.114759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.114768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.114929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.115146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.115156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 
00:29:11.290 [2024-05-15 08:40:58.115282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.115414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.115424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.115577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.115719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.115728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.115927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.116075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.116084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.116304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.116543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.116552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.116705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.116793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.116802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.116946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.117070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.117080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.117278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.117436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.117446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 
00:29:11.290 [2024-05-15 08:40:58.117653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.117890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.117900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.118098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.118287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.118297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.118534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.118613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.118622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.118785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.118935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.118945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.119094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.119298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.119308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.119548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.119680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.290 [2024-05-15 08:40:58.119689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.290 qpair failed and we were unable to recover it. 00:29:11.290 [2024-05-15 08:40:58.119823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.291 [2024-05-15 08:40:58.120038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.291 [2024-05-15 08:40:58.120047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.291 qpair failed and we were unable to recover it. 
00:29:11.291 [2024-05-15 08:40:58.120217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.291 [2024-05-15 08:40:58.120370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.291 [2024-05-15 08:40:58.120379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.291 qpair failed and we were unable to recover it. 00:29:11.291 [2024-05-15 08:40:58.120542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.291 [2024-05-15 08:40:58.120748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.291 [2024-05-15 08:40:58.120757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.291 qpair failed and we were unable to recover it. 00:29:11.291 [2024-05-15 08:40:58.120982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.291 [2024-05-15 08:40:58.121146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.291 [2024-05-15 08:40:58.121155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f86b4000b90 with addr=10.0.0.2, port=4420 00:29:11.291 qpair failed and we were unable to recover it. 00:29:11.291 A controller has encountered a failure and is being reset. 00:29:11.291 [2024-05-15 08:40:58.121380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.291 [2024-05-15 08:40:58.121629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.291 [2024-05-15 08:40:58.121644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e770 with addr=10.0.0.2, port=4420 00:29:11.291 [2024-05-15 08:40:58.121654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e770 is same with the state(5) to be set 00:29:11.291 [2024-05-15 08:40:58.121669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e770 (9): Bad file descriptor 00:29:11.291 [2024-05-15 08:40:58.121681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.291 [2024-05-15 08:40:58.121692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.291 [2024-05-15 08:40:58.121705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.291 Unable to reset the controller. 
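The long run of 'connect() failed, errno = 111' records above is the expected symptom of this phase of the disconnect test: errno 111 is ECONNREFUSED, meaning nothing is accepting on 10.0.0.2:4420 while the target side is held down, so every reconnect attempt on the qpair is refused, and the trailing records show the subsequent controller reset failing as well ('Unable to reset the controller.'). A minimal sketch of observing the same failure from a shell (address and port taken from the log; this probe is not part of the test scripts):

    # Probe the target port the way the failing qpair does; a refused
    # (or timed-out) connect here corresponds to the errno 111 records.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo 'connect() to 10.0.0.2:4420 failed: no listener (ECONNREFUSED)'
    fi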
00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.858 Malloc0 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.858 [2024-05-15 08:40:58.640030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.858 [2024-05-15 08:40:58.668069] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:11.858 [2024-05-15 08:40:58.668303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.858 08:40:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 453536 00:29:12.424 Controller properly reset. 00:29:17.689 Initializing NVMe Controllers 00:29:17.689 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:17.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:17.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:17.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:17.689 Initialization complete. Launching workers. 
00:29:17.689 Starting thread on core 1 00:29:17.689 Starting thread on core 2 00:29:17.689 Starting thread on core 3 00:29:17.689 Starting thread on core 0 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:29:17.689 00:29:17.689 real 0m11.309s 00:29:17.689 user 0m36.725s 00:29:17.689 sys 0m5.684s 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.689 ************************************ 00:29:17.689 END TEST nvmf_target_disconnect_tc2 00:29:17.689 ************************************ 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:17.689 rmmod nvme_tcp 00:29:17.689 rmmod nvme_fabrics 00:29:17.689 rmmod nvme_keyring 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 454079 ']' 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 454079 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 454079 ']' 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 454079 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 454079 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 454079' 00:29:17.689 killing process with pid 454079 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 454079 00:29:17.689 [2024-05-15 08:41:04.183300] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
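For reference, the target that tc2 tears down here was brought up earlier in the trace with a short rpc_cmd sequence: create a malloc bdev, create the TCP transport, create the subsystem, attach the namespace, then add the data and discovery listeners. A sketch of the same bring-up done by hand with scripts/rpc.py against a running nvmf_tgt, with each command and flag copied from the rpc_cmd lines logged above:

    RPC=./scripts/rpc.py    # assumes the current directory is the spdk tree
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420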
00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 454079 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.689 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.690 08:41:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.588 08:41:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:19.588 00:29:19.588 real 0m19.326s 00:29:19.588 user 1m3.800s 00:29:19.588 sys 0m10.150s 00:29:19.588 08:41:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:19.588 08:41:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:19.588 ************************************ 00:29:19.588 END TEST nvmf_target_disconnect 00:29:19.588 ************************************ 00:29:19.588 08:41:06 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:29:19.588 08:41:06 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.588 08:41:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.588 08:41:06 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:29:19.588 00:29:19.588 real 18m46.222s 00:29:19.588 user 41m15.562s 00:29:19.588 sys 5m49.739s 00:29:19.588 08:41:06 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:19.588 08:41:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.588 ************************************ 00:29:19.588 END TEST nvmf_tcp 00:29:19.588 ************************************ 00:29:19.588 08:41:06 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:29:19.588 08:41:06 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:19.588 08:41:06 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:19.588 08:41:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:19.588 08:41:06 -- common/autotest_common.sh@10 -- # set +x 00:29:19.847 ************************************ 00:29:19.847 START TEST spdkcli_nvmf_tcp 00:29:19.847 ************************************ 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:19.847 * Looking for test storage... 
00:29:19.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=455805 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 455805 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 455805 ']' 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:19.847 08:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.847 [2024-05-15 08:41:06.794953] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:29:19.847 [2024-05-15 08:41:06.795000] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455805 ] 00:29:19.847 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.847 [2024-05-15 08:41:06.847555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:20.106 [2024-05-15 08:41:06.922836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.106 [2024-05-15 08:41:06.922839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.672 08:41:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:20.673 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:20.673 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:20.673 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:20.673 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:20.673 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:20.673 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:20.673 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:20.673 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:20.673 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:20.673 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:20.673 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:20.673 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:20.673 ' 00:29:23.201 [2024-05-15 08:41:10.068941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.574 [2024-05-15 08:41:11.244498] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:24.574 [2024-05-15 08:41:11.244854] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:26.472 [2024-05-15 08:41:13.407526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:28.372 [2024-05-15 08:41:15.265394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:29.746 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:29.746 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:29.746 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:29.746 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:29.746 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:29.746 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:29.746 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:29.746 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:29.746 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:29.746 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:29.746 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:29.746 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:30.004 08:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:30.004 08:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.004 08:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.004 08:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:30.005 08:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:30.005 08:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.005 08:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:30.005 08:41:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:29:30.263 08:41:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:30.263 08:41:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:30.263 08:41:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:30.263 08:41:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.263 08:41:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.263 08:41:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:30.263 08:41:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:30.263 08:41:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.263 08:41:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:30.263 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:30.263 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:30.263 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:30.263 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:30.263 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:30.263 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:30.263 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:30.263 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:30.263 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:30.263 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:30.263 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:30.263 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:30.263 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:30.263 ' 00:29:35.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:35.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:35.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:35.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:35.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:35.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:35.527 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:35.527 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:35.527 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:35.527 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:35.527 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:35.527 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:35.527 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:35.527 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:35.527 08:41:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:35.527 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.527 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.527 08:41:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 455805 00:29:35.527 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 455805 ']' 00:29:35.527 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 455805 00:29:35.527 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:29:35.527 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:35.527 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 455805 00:29:35.527 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 455805' 00:29:35.528 killing process with pid 455805 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 455805 00:29:35.528 [2024-05-15 08:41:22.296725] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 455805 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 455805 ']' 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 455805 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 455805 ']' 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 455805 00:29:35.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (455805) - No such process 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 455805 is not found' 00:29:35.528 Process with pid 455805 is not found 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:35.528 00:29:35.528 real 0m15.881s 00:29:35.528 user 0m32.831s 00:29:35.528 sys 0m0.773s 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:35.528 08:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
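The create and delete phases of this test are driven by spdkcli_job.py replaying the quoted command strings shown above, and the intermediate state is verified by diffing the output of 'spdkcli.py ll /nvmf' against spdkcli_nvmf.test.match. As a rough sketch, assuming spdkcli accepts one command per invocation the way the logged 'll /nvmf' call does, the same objects can be created and inspected by hand:

    # Sketch only; command words copied from the job strings above.
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py ll /nvmf    # dump the tree, as the match step does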
00:29:35.528 ************************************ 00:29:35.528 END TEST spdkcli_nvmf_tcp 00:29:35.528 ************************************ 00:29:35.528 08:41:22 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:35.528 08:41:22 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:35.528 08:41:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:35.528 08:41:22 -- common/autotest_common.sh@10 -- # set +x 00:29:35.787 ************************************ 00:29:35.787 START TEST nvmf_identify_passthru 00:29:35.787 ************************************ 00:29:35.787 08:41:22 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:35.787 * Looking for test storage... 00:29:35.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:35.787 08:41:22 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.787 08:41:22 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.787 08:41:22 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.787 08:41:22 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.787 08:41:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.787 08:41:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.787 08:41:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.787 08:41:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:35.787 08:41:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:35.787 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:35.787 08:41:22 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.787 08:41:22 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.787 08:41:22 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.787 08:41:22 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.787 08:41:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.787 08:41:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.787 08:41:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.787 08:41:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:35.788 08:41:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.788 08:41:22 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:35.788 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:35.788 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.788 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:35.788 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:35.788 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:35.788 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.788 08:41:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:35.788 08:41:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.788 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:35.788 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:35.788 08:41:22 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:35.788 08:41:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.061 08:41:27 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:41.061 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:41.061 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:41.061 Found net devices under 0000:86:00.0: cvl_0_0 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.061 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:41.062 Found net devices under 0000:86:00.1: cvl_0_1 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
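
The trace above shows nvmf/common.sh matching both E810 ports (vendor 0x8086, device 0x159b) and resolving their kernel net interfaces (cvl_0_0, cvl_0_1) from sysfs before handing off to nvmf_tcp_init. A minimal standalone sketch of that discovery step, assuming the sysfs layout seen in this run; the loop below is illustrative, not the harness's exact code:

    for pci in /sys/bus/pci/devices/*; do
        # keep only Intel E810 ports, matching the e810 array built above (0x8086:0x159b)
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            # each entry under <pci>/net/ is a kernel netdev bound to that port
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done
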
00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:41.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:29:41.062 00:29:41.062 --- 10.0.0.2 ping statistics --- 00:29:41.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.062 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:29:41.062 00:29:41.062 --- 10.0.0.1 ping statistics --- 00:29:41.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.062 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:41.062 08:41:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:41.062 08:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:41.062 08:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:29:41.062 08:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:5e:00.0 00:29:41.062 08:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:29:41.062 08:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:29:41.062 08:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:41.062 08:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:41.062 08:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:41.062 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.253 
08:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:29:45.253 08:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:45.253 08:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:45.253 08:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:45.253 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.444 08:41:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:49.444 08:41:35 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:49.444 08:41:35 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:49.444 08:41:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.444 08:41:35 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:49.444 08:41:35 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:49.444 08:41:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.444 08:41:35 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=462608 00:29:49.444 08:41:35 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:49.444 08:41:35 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:49.444 08:41:35 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 462608 00:29:49.444 08:41:35 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 462608 ']' 00:29:49.444 08:41:35 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.444 08:41:35 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:49.444 08:41:35 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.444 08:41:35 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:49.444 08:41:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.444 [2024-05-15 08:41:35.855221] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:29:49.444 [2024-05-15 08:41:35.855266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.444 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.444 [2024-05-15 08:41:35.909820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:49.444 [2024-05-15 08:41:35.989481] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.444 [2024-05-15 08:41:35.989516] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:49.444 [2024-05-15 08:41:35.989526] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.444 [2024-05-15 08:41:35.989532] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.444 [2024-05-15 08:41:35.989537] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:49.444 [2024-05-15 08:41:35.989578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.444 [2024-05-15 08:41:35.989601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.444 [2024-05-15 08:41:35.989845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.444 [2024-05-15 08:41:35.989848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.703 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:49.703 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:29:49.703 08:41:36 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:49.703 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.703 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.703 INFO: Log level set to 20 00:29:49.703 INFO: Requests: 00:29:49.703 { 00:29:49.703 "jsonrpc": "2.0", 00:29:49.703 "method": "nvmf_set_config", 00:29:49.703 "id": 1, 00:29:49.703 "params": { 00:29:49.703 "admin_cmd_passthru": { 00:29:49.703 "identify_ctrlr": true 00:29:49.703 } 00:29:49.703 } 00:29:49.703 } 00:29:49.703 00:29:49.703 INFO: response: 00:29:49.703 { 00:29:49.703 "jsonrpc": "2.0", 00:29:49.703 "id": 1, 00:29:49.703 "result": true 00:29:49.703 } 00:29:49.703 00:29:49.703 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.703 08:41:36 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:49.703 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.703 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.703 INFO: Setting log level to 20 00:29:49.703 INFO: Setting log level to 20 00:29:49.703 INFO: Log level set to 20 00:29:49.703 INFO: Log level set to 20 00:29:49.703 INFO: Requests: 00:29:49.703 { 00:29:49.703 "jsonrpc": "2.0", 00:29:49.703 "method": "framework_start_init", 00:29:49.703 "id": 1 00:29:49.703 } 00:29:49.703 00:29:49.703 INFO: Requests: 00:29:49.703 { 00:29:49.703 "jsonrpc": "2.0", 00:29:49.703 "method": "framework_start_init", 00:29:49.704 "id": 1 00:29:49.704 } 00:29:49.704 00:29:49.962 [2024-05-15 08:41:36.755679] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:49.962 INFO: response: 00:29:49.962 { 00:29:49.962 "jsonrpc": "2.0", 00:29:49.962 "id": 1, 00:29:49.962 "result": true 00:29:49.962 } 00:29:49.962 00:29:49.962 INFO: response: 00:29:49.962 { 00:29:49.962 "jsonrpc": "2.0", 00:29:49.962 "id": 1, 00:29:49.962 "result": true 00:29:49.962 } 00:29:49.962 00:29:49.962 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.962 08:41:36 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.962 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.962 08:41:36 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:49.962 INFO: Setting log level to 40 00:29:49.962 INFO: Setting log level to 40 00:29:49.962 INFO: Setting log level to 40 00:29:49.962 [2024-05-15 08:41:36.769153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.962 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.962 08:41:36 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:49.962 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:49.962 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.962 08:41:36 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:29:49.962 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.962 08:41:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.250 Nvme0n1 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.250 [2024-05-15 08:41:39.669006] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:53.250 [2024-05-15 08:41:39.669235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.250 [ 00:29:53.250 { 00:29:53.250 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:53.250 "subtype": "Discovery", 00:29:53.250 "listen_addresses": [], 00:29:53.250 "allow_any_host": true, 00:29:53.250 "hosts": [] 00:29:53.250 }, 00:29:53.250 { 00:29:53.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.250 "subtype": "NVMe", 00:29:53.250 "listen_addresses": [ 00:29:53.250 { 00:29:53.250 "trtype": "TCP", 
00:29:53.250 "adrfam": "IPv4", 00:29:53.250 "traddr": "10.0.0.2", 00:29:53.250 "trsvcid": "4420" 00:29:53.250 } 00:29:53.250 ], 00:29:53.250 "allow_any_host": true, 00:29:53.250 "hosts": [], 00:29:53.250 "serial_number": "SPDK00000000000001", 00:29:53.250 "model_number": "SPDK bdev Controller", 00:29:53.250 "max_namespaces": 1, 00:29:53.250 "min_cntlid": 1, 00:29:53.250 "max_cntlid": 65519, 00:29:53.250 "namespaces": [ 00:29:53.250 { 00:29:53.250 "nsid": 1, 00:29:53.250 "bdev_name": "Nvme0n1", 00:29:53.250 "name": "Nvme0n1", 00:29:53.250 "nguid": "A0CF6EFD7D4845BD97EFECDA3CFDDF79", 00:29:53.250 "uuid": "a0cf6efd-7d48-45bd-97ef-ecda3cfddf79" 00:29:53.250 } 00:29:53.250 ] 00:29:53.250 } 00:29:53.250 ] 00:29:53.250 08:41:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:53.250 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:53.250 08:41:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:53.250 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.250 08:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:29:53.250 08:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:29:53.250 08:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:29:53.250 08:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:53.250 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.250 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.250 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.250 08:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:53.250 08:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:53.250 rmmod nvme_tcp 00:29:53.250 rmmod nvme_fabrics 00:29:53.250 rmmod 
nvme_keyring 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 462608 ']' 00:29:53.250 08:41:40 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 462608 00:29:53.250 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 462608 ']' 00:29:53.250 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 462608 00:29:53.250 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:29:53.250 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:53.250 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 462608 00:29:53.509 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:53.509 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:53.509 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 462608' 00:29:53.509 killing process with pid 462608 00:29:53.509 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 462608 00:29:53.509 [2024-05-15 08:41:40.286980] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:53.509 08:41:40 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 462608 00:29:54.885 08:41:41 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:54.885 08:41:41 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:54.885 08:41:41 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:54.885 08:41:41 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:54.885 08:41:41 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:54.885 08:41:41 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.885 08:41:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:54.885 08:41:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.419 08:41:43 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:57.419 00:29:57.419 real 0m21.270s 00:29:57.419 user 0m30.208s 00:29:57.419 sys 0m4.322s 00:29:57.419 08:41:43 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:57.419 08:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.419 ************************************ 00:29:57.419 END TEST nvmf_identify_passthru 00:29:57.419 ************************************ 00:29:57.419 08:41:43 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:57.419 08:41:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:57.419 08:41:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:57.419 08:41:43 -- common/autotest_common.sh@10 -- # set +x 00:29:57.419 ************************************ 00:29:57.419 START TEST nvmf_dif 00:29:57.419 
************************************ 00:29:57.419 08:41:43 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:57.419 * Looking for test storage... 00:29:57.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.419 08:41:44 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.419 08:41:44 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.420 08:41:44 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.420 08:41:44 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.420 08:41:44 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.420 08:41:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.420 08:41:44 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.420 08:41:44 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.420 08:41:44 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:57.420 08:41:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.420 08:41:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:57.420 08:41:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:57.420 08:41:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:57.420 08:41:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:57.420 08:41:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.420 08:41:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:57.420 08:41:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.420 08:41:44 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.420 08:41:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
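
Earlier in this block, nvmf/common.sh derives the initiator identity for the dif tests with nvme-cli: nvme gen-hostnqn returns an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the bare UUID becomes NVME_HOSTID. A sketch of that pairing, and of how NVME_HOST would be consumed on an initiator-side connect, assuming nvme-cli is installed; the parameter expansion is illustrative, since only the gen-hostnqn call itself appears in the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # illustrative: strip the prefix, keep the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # subsystem NQN and listener address/port taken from the trace
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn
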
00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:02.690 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:02.690 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.690 08:41:49 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:02.690 Found net devices under 0000:86:00.0: cvl_0_0 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:02.690 Found net devices under 0000:86:00.1: cvl_0_1 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.690 08:41:49 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.691 08:41:49 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:30:02.691 00:30:02.691 --- 10.0.0.2 ping statistics --- 00:30:02.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.691 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:30:02.691 08:41:49 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:02.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:30:02.691 00:30:02.691 --- 10.0.0.1 ping statistics --- 00:30:02.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.691 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:30:02.691 08:41:49 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.691 08:41:49 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:02.691 08:41:49 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:02.691 08:41:49 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:05.221 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:05.221 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:05.221 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:05.221 08:41:51 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.221 08:41:51 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:05.221 08:41:51 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:05.221 08:41:51 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.221 08:41:51 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:05.221 08:41:51 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:05.221 08:41:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:05.221 08:41:51 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:30:05.221 08:41:51 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:05.221 08:41:51 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:05.221 08:41:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:05.221 08:41:51 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=468283 00:30:05.221 08:41:51 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 468283 00:30:05.221 08:41:51 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:05.221 08:41:51 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 468283 ']' 00:30:05.221 08:41:51 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.221 08:41:51 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:05.221 08:41:51 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.221 08:41:51 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:05.221 08:41:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:05.221 [2024-05-15 08:41:52.038792] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:30:05.221 [2024-05-15 08:41:52.038836] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.221 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.221 [2024-05-15 08:41:52.095528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.221 [2024-05-15 08:41:52.175056] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.221 [2024-05-15 08:41:52.175090] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.221 [2024-05-15 08:41:52.175100] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.221 [2024-05-15 08:41:52.175106] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.221 [2024-05-15 08:41:52.175111] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
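
The app_setup_trace NOTICEs above spell out how to pull the tracepoint history for this nvmf_tgt instance (shm id 0, tracepoint group mask 0xFFFF). From the workspace used in this job, that amounts to the following; both commands come straight from the notice text, and the binary path assumes the build/bin layout used elsewhere in this log:

    build/bin/spdk_trace -s nvmf -i 0   # live snapshot of events, as the NOTICE suggests
    cp /dev/shm/nvmf_trace.0 /tmp/      # keep the raw buffer for offline analysis/debug
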
00:30:05.221 [2024-05-15 08:41:52.175128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.154 08:41:52 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:06.154 08:41:52 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:30:06.154 08:41:52 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:06.154 08:41:52 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.154 08:41:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:06.154 08:41:52 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.154 08:41:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:06.154 08:41:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:06.154 08:41:52 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.154 08:41:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:06.154 [2024-05-15 08:41:52.882595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.154 08:41:52 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.154 08:41:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:06.154 08:41:52 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:06.154 08:41:52 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:06.154 08:41:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:06.154 ************************************ 00:30:06.154 START TEST fio_dif_1_default 00:30:06.154 ************************************ 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:06.154 bdev_null0 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:06.154 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- 
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=()
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:30:06.155 {
00:30:06.155 "params": {
00:30:06.155 "name": "Nvme$subsystem",
00:30:06.155 "trtype": "$TEST_TRANSPORT",
00:30:06.155 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:06.155 "adrfam": "ipv4",
00:30:06.155 "trsvcid": "$NVMF_PORT",
00:30:06.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:06.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:06.155 "hdgst": ${hdgst:-false},
00:30:06.155 "ddgst": ${ddgst:-false}
00:30:06.155 },
00:30:06.155 "method": "bdev_nvme_attach_controller"
00:30:06.155 }
00:30:06.155 EOF
00:30:06.155 )")
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib=
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}"
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 ))
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files ))
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}'
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq .
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=,
00:30:06.155 08:41:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:30:06.155 "params": {
00:30:06.155 "name": "Nvme0",
00:30:06.155 "trtype": "tcp",
00:30:06.155 "traddr": "10.0.0.2",
00:30:06.155 "adrfam": "ipv4",
00:30:06.155 "trsvcid": "4420",
00:30:06.155 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:06.155 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:06.155 "hdgst": false,
00:30:06.155 "ddgst": false
00:30:06.155 },
00:30:06.155 "method": "bdev_nvme_attach_controller"
00:30:06.155 }'
00:30:06.155 08:41:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib=
00:30:06.155 08:41:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:30:06.155 08:41:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}"
00:30:06.155 08:41:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:06.155 08:41:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan
00:30:06.155 08:41:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}'
00:30:06.155 08:41:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib=
00:30:06.155 08:41:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:30:06.155 08:41:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:30:06.155 08:41:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:06.412 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:30:06.412 fio-3.35
00:30:06.412 Starting 1 thread
00:30:06.412 EAL: No free 2048 kB hugepages reported on node 1
00:30:18.605
00:30:18.605 filename0: (groupid=0, jobs=1): err= 0: pid=468665: Wed May 15 08:42:03 2024
00:30:18.605 read: IOPS=99, BW=399KiB/s (409kB/s)(4000KiB/10023msec)
00:30:18.605 slat (nsec): min=4317, max=45585, avg=6353.15, stdev=1492.43
00:30:18.605 clat (usec): min=598, max=45333, avg=40071.70, stdev=6196.85
00:30:18.605 lat (usec): min=604, max=45354, avg=40078.06, stdev=6196.87
00:30:18.605 clat percentiles (usec):
00:30:18.605 | 1.00th=[ 627], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:30:18.605 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:30:18.605 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:30:18.605 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351],
00:30:18.605 | 99.99th=[45351]
00:30:18.605 bw ( KiB/s): min= 384, max= 448, per=99.73%, avg=398.40, stdev=19.35, samples=20
00:30:18.605 iops : min= 96, max= 112, avg=99.60, stdev= 4.84, samples=20
00:30:18.605 lat (usec) : 750=2.40%
00:30:18.605 lat (msec) : 50=97.60%
00:30:18.605 cpu : usr=94.82%, sys=4.93%, ctx=19, majf=0, minf=250
00:30:18.605 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:18.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:18.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:18.605 issued rwts: total=1000,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:18.605 latency : target=0, window=0, percentile=100.00%, depth=4
00:30:18.605
00:30:18.605 Run status group 0 (all jobs):
00:30:18.605 READ: bw=399KiB/s (409kB/s), 399KiB/s-399KiB/s (409kB/s-409kB/s), io=4000KiB (4096kB), run=10023-10023msec
00:30:18.605 08:42:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:30:18.605 08:42:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:30:18.605 08:42:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:30:18.605 08:42:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:30:18.605 08:42:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:30:18.605 08:42:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:18.605 08:42:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:18.605 08:42:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:18.606
00:30:18.606 real 0m11.094s
00:30:18.606 user 0m16.188s
00:30:18.606 sys 0m0.821s
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:30:18.606 ************************************
00:30:18.606 END TEST fio_dif_1_default
00:30:18.606 ************************************
00:30:18.606 08:42:04 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:30:18.606 08:42:04 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:30:18.606 08:42:04 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable
00:30:18.606 08:42:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:30:18.606 ************************************
00:30:18.606 START TEST fio_dif_1_multi_subsystems
00:30:18.606 ************************************
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:30:18.606 08:42:04
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:18.605 bdev_null0 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:18.605 [2024-05-15 08:42:04.133281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:18.605 bdev_null1 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.605 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:18.606 { 00:30:18.606 "params": { 00:30:18.606 "name": "Nvme$subsystem", 00:30:18.606 "trtype": "$TEST_TRANSPORT", 00:30:18.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:18.606 "adrfam": "ipv4", 00:30:18.606 "trsvcid": "$NVMF_PORT", 00:30:18.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:18.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:18.606 "hdgst": ${hdgst:-false}, 00:30:18.606 "ddgst": ${ddgst:-false} 00:30:18.606 }, 00:30:18.606 "method": "bdev_nvme_attach_controller" 00:30:18.606 } 00:30:18.606 EOF 00:30:18.606 )") 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:18.606 08:42:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:18.606 { 00:30:18.606 "params": { 00:30:18.606 "name": "Nvme$subsystem", 00:30:18.606 "trtype": "$TEST_TRANSPORT", 00:30:18.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:18.606 "adrfam": "ipv4", 00:30:18.606 "trsvcid": "$NVMF_PORT", 00:30:18.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:18.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:18.606 "hdgst": ${hdgst:-false}, 00:30:18.606 "ddgst": ${ddgst:-false} 00:30:18.606 }, 00:30:18.606 "method": "bdev_nvme_attach_controller" 00:30:18.606 } 00:30:18.606 EOF 00:30:18.606 )") 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=,
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:30:18.606 "params": {
00:30:18.606 "name": "Nvme0",
00:30:18.606 "trtype": "tcp",
00:30:18.606 "traddr": "10.0.0.2",
00:30:18.606 "adrfam": "ipv4",
00:30:18.606 "trsvcid": "4420",
00:30:18.606 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:18.606 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:18.606 "hdgst": false,
00:30:18.606 "ddgst": false
00:30:18.606 },
00:30:18.606 "method": "bdev_nvme_attach_controller"
00:30:18.606 },{
00:30:18.606 "params": {
00:30:18.606 "name": "Nvme1",
00:30:18.606 "trtype": "tcp",
00:30:18.606 "traddr": "10.0.0.2",
00:30:18.606 "adrfam": "ipv4",
00:30:18.606 "trsvcid": "4420",
00:30:18.606 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:18.606 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:18.606 "hdgst": false,
00:30:18.606 "ddgst": false
00:30:18.606 },
00:30:18.606 "method": "bdev_nvme_attach_controller"
00:30:18.606 }'
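The JSON printed above is the bdev_nvme attach configuration handed to the fio spdk_bdev plugin over /dev/fd/62; the companion fio job file travels over /dev/fd/61 and is never echoed in this log. From the two job headers fio prints below (randread, 4096-byte blocks, iodepth 4, one job per subsystem, thread mode), a plausible reconstruction is the following sketch; the filename values and the 10-second runtime are assumptions inferred from the output, not captured text:

    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=4096
    iodepth=4
    thread=1
    time_based=1
    runtime=10

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1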
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib=
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}"
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}'
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib=
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:30:18.606 08:42:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:18.606 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:30:18.606 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:30:18.606 fio-3.35
00:30:18.606 Starting 2 threads
00:30:18.606 EAL: No free 2048 kB hugepages reported on node 1
00:30:28.583
00:30:28.584 filename0: (groupid=0, jobs=1): err= 0: pid=470761: Wed May 15 08:42:15 2024
00:30:28.584 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10009msec)
00:30:28.584 slat (nsec): min=6224, max=31885, avg=7957.57, stdev=2694.75
00:30:28.584 clat (usec): min=416, max=42002, avg=40663.44, stdev=3644.19
00:30:28.584 lat (usec): min=423, max=42014, avg=40671.40, stdev=3644.20
00:30:28.584 clat percentiles (usec):
00:30:28.584 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:30:28.584 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:30:28.584 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:30:28.584 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:30:28.584 | 99.99th=[42206]
00:30:28.584 bw ( KiB/s): min= 384, max= 448, per=33.97%, avg=392.00, stdev=17.60, samples=20
00:30:28.584 iops : min= 96, max= 112, avg=98.00, stdev= 4.40, samples=20
00:30:28.584 lat (usec) : 500=0.71%, 750=0.10%
00:30:28.584 lat (msec) : 50=99.19%
00:30:28.584 cpu : usr=97.93%, sys=1.81%, ctx=13, majf=0, minf=130
00:30:28.584 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:28.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:28.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:28.584 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:28.584 latency : target=0, window=0, percentile=100.00%, depth=4
00:30:28.584 filename1: (groupid=0, jobs=1): err= 0: pid=470762: Wed May 15 08:42:15 2024
00:30:28.584 read: IOPS=190, BW=761KiB/s (780kB/s)(7632KiB/10025msec)
00:30:28.584 slat (nsec): min=6196, max=44963, avg=7297.80, stdev=2086.95
00:30:28.584 clat (usec): min=395, max=42646, avg=20995.57, stdev=20494.72
00:30:28.584 lat (usec): min=402, max=42653, avg=21002.86, stdev=20494.07
00:30:28.584 clat percentiles (usec):
00:30:28.584 | 1.00th=[ 408], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 429],
00:30:28.584 | 30.00th=[ 453], 40.00th=[ 498], 50.00th=[40633], 60.00th=[41157],
00:30:28.584 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681],
00:30:28.584 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:30:28.584 | 99.99th=[42730]
00:30:28.584 bw ( KiB/s): min= 704, max= 768, per=65.95%, avg=761.60, stdev=19.70, samples=20
00:30:28.584 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20
00:30:28.584 lat (usec) : 500=40.88%, 750=8.96%, 1000=0.05%
00:30:28.584 lat (msec) : 50=50.10%
00:30:28.584 cpu : usr=97.73%, sys=2.03%, ctx=9, majf=0, minf=133
00:30:28.584 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:28.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:28.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:28.584 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:28.584 latency : target=0, window=0, percentile=100.00%, depth=4
00:30:28.584
00:30:28.584 Run status group 0 (all jobs):
00:30:28.584 READ: bw=1154KiB/s (1182kB/s), 393KiB/s-761KiB/s (403kB/s-780kB/s), io=11.3MiB (11.8MB), run=10009-10025msec
00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems --
common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.584 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:28.843 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.843 00:30:28.843 real 0m11.521s 00:30:28.843 user 0m26.771s 00:30:28.843 sys 0m0.746s 00:30:28.843 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:28.843 08:42:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:28.843 ************************************ 00:30:28.843 END TEST fio_dif_1_multi_subsystems 00:30:28.843 ************************************ 00:30:28.843 08:42:15 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:28.843 08:42:15 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:28.843 08:42:15 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:28.843 08:42:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:28.843 ************************************ 00:30:28.843 START TEST fio_dif_rand_params 00:30:28.843 ************************************ 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:28.843 
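Before fio_dif_rand_params gets going, a quick cross-check of the two completed runs above, using only figures reported in the log (shell arithmetic and bc purely for illustration):

    # fio_dif_1_default: io=4000KiB over the 10023 ms run
    echo 'scale=1; 4000 / 10.023' | bc   # 399.0, matching BW=399KiB/s
    # fio_dif_1_multi_subsystems: per-job bandwidths and io totals add up
    echo $((393 + 761))                  # 1154, matching READ: bw=1154KiB/s
    echo $((3936 + 7632))                # 11568 KiB, about 11.3 MiB, matching io=11.3MiB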
08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.843 bdev_null0 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.843 [2024-05-15 08:42:15.722218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.843 { 00:30:28.843 "params": { 00:30:28.843 "name": "Nvme$subsystem", 00:30:28.843 "trtype": "$TEST_TRANSPORT", 00:30:28.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.843 "adrfam": "ipv4", 00:30:28.843 "trsvcid": "$NVMF_PORT", 00:30:28.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.843 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:30:28.843 "hdgst": ${hdgst:-false}, 00:30:28.843 "ddgst": ${ddgst:-false} 00:30:28.843 }, 00:30:28.843 "method": "bdev_nvme_attach_controller" 00:30:28.843 } 00:30:28.843 EOF 00:30:28.843 )") 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:30:28.843 "params": {
00:30:28.843 "name": "Nvme0",
00:30:28.843 "trtype": "tcp",
00:30:28.843 "traddr": "10.0.0.2",
00:30:28.843 "adrfam": "ipv4",
00:30:28.843 "trsvcid": "4420",
00:30:28.843 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:28.843 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:28.843 "hdgst": false,
00:30:28.843 "ddgst": false
00:30:28.843 },
00:30:28.843 "method": "bdev_nvme_attach_controller"
00:30:28.843 }'
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib=
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}"
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}'
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib=
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:30:28.843 08:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:29.101 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:30:29.101 ...
00:30:29.101 fio-3.35
00:30:29.101 Starting 3 threads
00:30:29.101 EAL: No free 2048 kB hugepages reported on node 1
00:30:35.665
00:30:35.665 filename0: (groupid=0, jobs=1): err= 0: pid=473118: Wed May 15 08:42:21 2024
00:30:35.665 read: IOPS=314, BW=39.3MiB/s (41.2MB/s)(199MiB/5046msec)
00:30:35.665 slat (nsec): min=6423, max=44880, avg=10924.58, stdev=2426.12
00:30:35.665 clat (usec): min=3205, max=50290, avg=9491.90, stdev=6287.72
00:30:35.665 lat (usec): min=3213, max=50302, avg=9502.82, stdev=6287.72
00:30:35.665 clat percentiles (usec):
00:30:35.665 | 1.00th=[ 3621], 5.00th=[ 5800], 10.00th=[ 6587], 20.00th=[ 7635],
00:30:35.665 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979],
00:30:35.665 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[11076],
00:30:35.665 | 99.00th=[47449], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070],
00:30:35.665 | 99.99th=[50070]
00:30:35.665 bw ( KiB/s): min=27081, max=44544, per=35.23%, avg=40596.10, stdev=5018.76, samples=10
00:30:35.665 iops : min= 211, max= 348, avg=317.10, stdev=39.38, samples=10
00:30:35.665 lat (msec) : 4=2.64%, 10=81.80%, 20=12.97%, 50=2.52%, 100=0.06%
00:30:35.665 cpu : usr=94.93%, sys=4.78%, ctx=12, majf=0, minf=129
00:30:35.665 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:35.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:35.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:35.665 issued rwts: total=1588,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:35.665 latency : target=0, window=0, percentile=100.00%, depth=3
00:30:35.665 filename0: (groupid=0, jobs=1): err= 0: pid=473119: Wed May 15 08:42:21 2024
00:30:35.665 read: IOPS=290, BW=36.3MiB/s (38.0MB/s)(182MiB/5003msec)
00:30:35.665 slat (nsec): min=6383, max=30359, avg=11248.17, stdev=2253.96
00:30:35.665 clat (usec): min=3364, max=50344, avg=10322.33, stdev=6756.78
00:30:35.665 lat (usec): min=3371, max=50354, avg=10333.57, stdev=6756.70
00:30:35.665 clat percentiles (usec):
00:30:35.665 | 1.00th=[ 3654], 5.00th=[ 5473], 10.00th=[ 6718], 20.00th=[ 8225],
00:30:35.665 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896],
00:30:35.665 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11469], 95.00th=[12125],
00:30:35.665 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50070], 99.95th=[50594],
00:30:35.665 | 99.99th=[50594]
00:30:35.665 bw ( KiB/s): min=31232, max=42752, per=32.15%, avg=37042.22, stdev=3897.59, samples=9
00:30:35.665 iops : min= 244, max= 334, avg=289.33, stdev=30.50, samples=9
00:30:35.665 lat (msec) : 4=2.48%, 10=60.33%, 20=34.30%, 50=2.34%, 100=0.55%
00:30:35.665 cpu : usr=94.90%, sys=4.82%, ctx=12, majf=0, minf=79
00:30:35.665 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:35.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:35.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:35.665 issued rwts: total=1452,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:35.665 latency : target=0, window=0, percentile=100.00%, depth=3
00:30:35.665 filename0: (groupid=0, jobs=1): err= 0: pid=473120: Wed May 15 08:42:21 2024
00:30:35.665 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(188MiB/5044msec)
00:30:35.665 slat (nsec): min=6377, max=51623, avg=11118.09, stdev=2382.22
00:30:35.665 clat (usec): min=3574, max=50410, avg=10033.56, stdev=6015.92
00:30:35.665 lat (usec): min=3581, max=50422, avg=10044.68, stdev=6015.97
00:30:35.665 clat percentiles (usec):
00:30:35.665 | 1.00th=[ 3916], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7832],
00:30:35.665 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9896],
00:30:35.665 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11600], 95.00th=[12125],
00:30:35.665 | 99.00th=[47449], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594],
00:30:35.665 | 99.99th=[50594]
00:30:35.665 bw ( KiB/s): min=26112, max=47616, per=33.33%, avg=38400.00, stdev=5841.44, samples=10
00:30:35.665 iops : min= 204, max= 372, avg=300.00, stdev=45.64, samples=10
00:30:35.665 lat (msec) : 4=1.26%, 10=62.52%, 20=33.89%, 50=2.00%, 100=0.33%
00:30:35.665 cpu : usr=95.26%, sys=4.44%, ctx=11, majf=0, minf=115
00:30:35.665 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:35.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:35.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:35.665 issued rwts: total=1502,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:35.665 latency : target=0, window=0, percentile=100.00%, depth=3
00:30:35.665
00:30:35.665 Run status group 0 (all jobs):
00:30:35.665 READ: bw=113MiB/s (118MB/s), 36.3MiB/s-39.3MiB/s (38.0MB/s-41.2MB/s), io=568MiB (595MB), run=5003-5046msec
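Two notes on the phase that just completed, grounded only in the trace above: its namespace sat on a DIF type 3 null bdev ('bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3', i.e. a 64 MB bdev with 512-byte data blocks plus 16 bytes of per-block metadata carrying T10 protection information; the decoding of the positional arguments as name/size_mb/block_size is editorial), and the three-job summary is self-consistent:

    echo 'scale=1; 568 / 5.046' | bc   # 112.5, consistent with READ: bw=113MiB/s
                                       # (io=568MiB over runs of 5003-5046 ms)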
00:30:35.665 08:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:30:35.665 08:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:30:35.665 08:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:30:35.665 08:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:30:35.665 08:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:30:35.665 08:42:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:35.665 08:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:35.665 08:42:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:35.665 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local
sub_id=0 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 bdev_null0 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 [2024-05-15 08:42:22.049565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 bdev_null1 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 bdev_null2 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.666 { 00:30:35.666 "params": { 00:30:35.666 "name": "Nvme$subsystem", 00:30:35.666 "trtype": "$TEST_TRANSPORT", 00:30:35.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.666 "adrfam": "ipv4", 00:30:35.666 "trsvcid": "$NVMF_PORT", 00:30:35.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.666 "hdgst": ${hdgst:-false}, 00:30:35.666 "ddgst": ${ddgst:-false} 00:30:35.666 }, 00:30:35.666 "method": "bdev_nvme_attach_controller" 00:30:35.666 } 00:30:35.666 EOF 00:30:35.666 )") 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.666 { 00:30:35.666 "params": { 00:30:35.666 "name": "Nvme$subsystem", 00:30:35.666 "trtype": "$TEST_TRANSPORT", 00:30:35.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.666 "adrfam": "ipv4", 00:30:35.666 "trsvcid": "$NVMF_PORT", 00:30:35.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.666 "hdgst": ${hdgst:-false}, 00:30:35.666 "ddgst": ${ddgst:-false} 00:30:35.666 }, 00:30:35.666 "method": "bdev_nvme_attach_controller" 00:30:35.666 } 00:30:35.666 EOF 00:30:35.666 )") 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:35.666 08:42:22 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:35.666 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.666 { 00:30:35.666 "params": { 00:30:35.666 "name": "Nvme$subsystem", 00:30:35.666 "trtype": "$TEST_TRANSPORT", 00:30:35.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.666 "adrfam": "ipv4", 00:30:35.666 "trsvcid": "$NVMF_PORT", 00:30:35.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.666 "hdgst": ${hdgst:-false}, 00:30:35.666 "ddgst": ${ddgst:-false} 00:30:35.666 }, 00:30:35.666 "method": "bdev_nvme_attach_controller" 00:30:35.666 } 00:30:35.666 EOF 00:30:35.666 )") 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:35.667 "params": { 00:30:35.667 "name": "Nvme0", 00:30:35.667 "trtype": "tcp", 00:30:35.667 "traddr": "10.0.0.2", 00:30:35.667 "adrfam": "ipv4", 00:30:35.667 "trsvcid": "4420", 00:30:35.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:35.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:35.667 "hdgst": false, 00:30:35.667 "ddgst": false 00:30:35.667 }, 00:30:35.667 "method": "bdev_nvme_attach_controller" 00:30:35.667 },{ 00:30:35.667 "params": { 00:30:35.667 "name": "Nvme1", 00:30:35.667 "trtype": "tcp", 00:30:35.667 "traddr": "10.0.0.2", 00:30:35.667 "adrfam": "ipv4", 00:30:35.667 "trsvcid": "4420", 00:30:35.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.667 "hdgst": false, 00:30:35.667 "ddgst": false 00:30:35.667 }, 00:30:35.667 "method": "bdev_nvme_attach_controller" 00:30:35.667 },{ 00:30:35.667 "params": { 00:30:35.667 "name": "Nvme2", 00:30:35.667 "trtype": "tcp", 00:30:35.667 "traddr": "10.0.0.2", 00:30:35.667 "adrfam": "ipv4", 00:30:35.667 "trsvcid": "4420", 00:30:35.667 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:35.667 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:35.667 "hdgst": false, 00:30:35.667 "ddgst": false 00:30:35.667 }, 00:30:35.667 "method": "bdev_nvme_attach_controller" 00:30:35.667 }' 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:35.667 
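One last arithmetic aside, assuming nothing beyond the traced parameters: numjobs=8 (from the bs=4k/iodepth=16 parameter block earlier) applied to the three filename jobs in the configuration printed just above accounts for the thread count fio reports next:

    echo $((8 * 3))                 # 24, matching 'Starting 24 threads'
    echo 'scale=2; 100 / 24' | bc   # 4.16, one thread's fair share of group
                                    # bandwidth; the first report shows per=4.18%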
08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib=
00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]]
00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:30:35.667 08:42:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:35.667 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:30:35.667 ...
00:30:35.667 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:30:35.667 ...
00:30:35.667 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:30:35.667 ...
00:30:35.667 fio-3.35
00:30:35.667 Starting 24 threads
00:30:35.667 EAL: No free 2048 kB hugepages reported on node 1
00:30:47.866
00:30:47.866 filename0: (groupid=0, jobs=1): err= 0: pid=474357: Wed May 15 08:42:33 2024
00:30:47.866 read: IOPS=575, BW=2301KiB/s (2357kB/s)(22.5MiB/10011msec)
00:30:47.866 slat (usec): min=7, max=104, avg=33.51, stdev=20.00
00:30:47.866 clat (usec): min=5470, max=50455, avg=27540.15, stdev=2513.47
00:30:47.866 lat (usec): min=5488, max=50491, avg=27573.66, stdev=2514.11
00:30:47.866 clat percentiles (usec):
00:30:47.866 | 1.00th=[14877], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395],
00:30:47.866 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:30:47.866 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443],
00:30:47.866 | 99.00th=[28967], 99.50th=[33424], 99.90th=[50070], 99.95th=[50594],
00:30:47.866 | 99.99th=[50594]
00:30:47.866 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2297.60, stdev=77.42, samples=20
00:30:47.866 iops : min= 544, max= 640, avg=574.40, stdev=19.35, samples=20
00:30:47.866 lat (msec) : 10=0.83%, 20=0.56%, 50=98.44%, 100=0.17%
00:30:47.866 cpu : usr=99.03%, sys=0.61%, ctx=18, majf=0, minf=43
00:30:47.866 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:30:47.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:47.866 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:47.866 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:47.866 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:47.866 filename0: (groupid=0, jobs=1): err= 0: pid=474358: Wed May 15 08:42:33 2024
00:30:47.866 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec)
00:30:47.866 slat (usec): min=6, max=125, avg=44.41, stdev=23.92
00:30:47.866 clat (usec): min=12299, max=80744, avg=27669.36, stdev=3002.61
00:30:47.866 lat (usec): min=12328, max=80761, avg=27713.76, stdev=3000.77
00:30:47.866 clat percentiles (usec):
00:30:47.866 | 1.00th=[26870], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132],
00:30:47.866 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657],
00:30:47.866 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181],
00:30:47.866 | 99.00th=[30016], 99.50th=[32900], 99.90th=[80217], 99.95th=[80217],
00:30:47.866 | 99.99th=[81265]
00:30:47.866 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2270.32, stdev=71.93, samples=19
00:30:47.866 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19
00:30:47.866 lat (msec) : 20=0.32%, 50=99.40%, 100=0.28%
00:30:47.866 cpu : usr=98.75%, sys=0.86%, ctx=16, majf=0, minf=32
00:30:47.866 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0%
00:30:47.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:47.866 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:47.866 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:47.866 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:47.866 filename0: (groupid=0, jobs=1): err= 0: pid=474359: Wed May 15 08:42:33 2024
00:30:47.866 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec)
00:30:47.866 slat (usec): min=8, max=105, avg=49.32, stdev=21.68
00:30:47.866 clat (usec): min=16872, max=56596, avg=27605.14, stdev=1718.98
00:30:47.866 lat (usec): min=16905, max=56627, avg=27654.46, stdev=1717.37
00:30:47.866 clat percentiles (usec):
00:30:47.866 | 1.00th=[26870], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132],
00:30:47.866 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657],
00:30:47.866 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181],
00:30:47.866 | 99.00th=[28967], 99.50th=[32900], 99.90th=[56361], 99.95th=[56361],
00:30:47.866 | 99.99th=[56361]
00:30:47.866 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2278.40, stdev=52.53, samples=20
00:30:47.866 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20
00:30:47.866 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28%
00:30:47.866 cpu : usr=98.84%, sys=0.77%, ctx=10, majf=0, minf=51
00:30:47.866 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:30:47.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:47.866 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:47.866 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:47.866 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:47.866 filename0: (groupid=0, jobs=1): err= 0: pid=474360: Wed May 15 08:42:33 2024
00:30:47.866 read: IOPS=570, BW=2281KiB/s (2336kB/s)(22.3MiB/10001msec)
00:30:47.866 slat (usec): min=4, max=106, avg=40.99, stdev=23.30
00:30:47.866 clat (usec): min=8529, max=69676, avg=27657.92, stdev=2782.90
00:30:47.866 lat (usec): min=8542, max=69688, avg=27698.91, stdev=2781.23
00:30:47.866 clat percentiles (usec):
00:30:47.866 | 1.00th=[22938], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132],
00:30:47.866 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657],
00:30:47.866 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28443],
00:30:47.866 | 99.00th=[33162], 99.50th=[38011], 99.90th=[69731], 99.95th=[69731],
00:30:47.867 | 99.99th=[69731]
00:30:47.867 bw ( KiB/s): min= 2052, max= 2304, per=4.12%, avg=2267.16, stdev=70.74, samples=19
00:30:47.867 iops : min= 513, max= 576, avg=566.79, stdev=17.68, samples=19
00:30:47.867 lat (msec) : 10=0.28%, 20=0.49%, 50=98.95%, 100=0.28%
00:30:47.867 cpu : usr=98.70%, sys=0.93%, ctx=16, majf=0, minf=49
00:30:47.867 IO depths : 1=5.9%, 2=11.8%, 4=24.1%, 8=51.5%, 16=6.8%, 32=0.0%, >=64=0.0%
00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:47.867 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:47.867 issued rwts: total=5704,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:47.867 filename0: (groupid=0, jobs=1): err= 0: pid=474361: Wed May 15 08:42:33
2024 00:30:47.867 read: IOPS=571, BW=2285KiB/s (2340kB/s)(22.3MiB/10009msec) 00:30:47.867 slat (nsec): min=6853, max=35894, avg=11829.42, stdev=4568.47 00:30:47.867 clat (usec): min=14252, max=66505, avg=27890.10, stdev=2120.25 00:30:47.867 lat (usec): min=14265, max=66534, avg=27901.93, stdev=2120.64 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[22938], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:47.867 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:47.867 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:47.867 | 99.00th=[30278], 99.50th=[40109], 99.90th=[56361], 99.95th=[56361], 00:30:47.867 | 99.99th=[66323] 00:30:47.867 bw ( KiB/s): min= 2096, max= 2304, per=4.15%, avg=2280.80, stdev=58.61, samples=20 00:30:47.867 iops : min= 524, max= 576, avg=570.20, stdev=14.65, samples=20 00:30:47.867 lat (msec) : 20=0.63%, 50=99.09%, 100=0.28% 00:30:47.867 cpu : usr=98.88%, sys=0.73%, ctx=14, majf=0, minf=37 00:30:47.867 IO depths : 1=6.0%, 2=12.1%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename0: (groupid=0, jobs=1): err= 0: pid=474362: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10007msec) 00:30:47.867 slat (usec): min=7, max=106, avg=42.47, stdev=24.51 00:30:47.867 clat (usec): min=9117, max=67058, avg=27533.92, stdev=2955.20 00:30:47.867 lat (usec): min=9126, max=67078, avg=27576.39, stdev=2956.11 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[16712], 5.00th=[25822], 10.00th=[27132], 20.00th=[27132], 00:30:47.867 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:30:47.867 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28443], 00:30:47.867 | 99.00th=[40109], 99.50th=[46400], 99.90th=[56361], 99.95th=[56886], 00:30:47.867 | 99.99th=[66847] 00:30:47.867 bw ( KiB/s): min= 2144, max= 2432, per=4.16%, avg=2288.80, stdev=67.58, samples=20 00:30:47.867 iops : min= 536, max= 608, avg=572.20, stdev=16.89, samples=20 00:30:47.867 lat (msec) : 10=0.10%, 20=1.88%, 50=97.73%, 100=0.28% 00:30:47.867 cpu : usr=98.87%, sys=0.74%, ctx=17, majf=0, minf=41 00:30:47.867 IO depths : 1=4.5%, 2=10.0%, 4=22.4%, 8=54.8%, 16=8.3%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename0: (groupid=0, jobs=1): err= 0: pid=474363: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=576, BW=2305KiB/s (2360kB/s)(22.5MiB/10002msec) 00:30:47.867 slat (usec): min=4, max=112, avg=42.43, stdev=24.79 00:30:47.867 clat (usec): min=8550, max=70631, avg=27396.69, stdev=3632.50 00:30:47.867 lat (usec): min=8602, max=70644, avg=27439.12, stdev=3633.86 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[15533], 5.00th=[22676], 10.00th=[26870], 20.00th=[27132], 00:30:47.867 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:30:47.867 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28443], 
00:30:47.867 | 99.00th=[40109], 99.50th=[41157], 99.90th=[70779], 99.95th=[70779], 00:30:47.867 | 99.99th=[70779] 00:30:47.867 bw ( KiB/s): min= 2048, max= 2576, per=4.16%, avg=2289.68, stdev=103.81, samples=19 00:30:47.867 iops : min= 512, max= 644, avg=572.42, stdev=25.95, samples=19 00:30:47.867 lat (msec) : 10=0.28%, 20=2.95%, 50=96.50%, 100=0.28% 00:30:47.867 cpu : usr=98.72%, sys=0.90%, ctx=16, majf=0, minf=37 00:30:47.867 IO depths : 1=2.8%, 2=8.3%, 4=22.6%, 8=56.4%, 16=9.9%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename0: (groupid=0, jobs=1): err= 0: pid=474365: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:30:47.867 slat (usec): min=11, max=109, avg=49.89, stdev=21.71 00:30:47.867 clat (usec): min=16938, max=56572, avg=27589.06, stdev=1715.58 00:30:47.867 lat (usec): min=16954, max=56599, avg=27638.95, stdev=1714.14 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[26870], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:30:47.867 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:30:47.867 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:30:47.867 | 99.00th=[28967], 99.50th=[32900], 99.90th=[56361], 99.95th=[56361], 00:30:47.867 | 99.99th=[56361] 00:30:47.867 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2278.40, stdev=52.53, samples=20 00:30:47.867 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:30:47.867 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:30:47.867 cpu : usr=98.55%, sys=1.06%, ctx=19, majf=0, minf=31 00:30:47.867 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename1: (groupid=0, jobs=1): err= 0: pid=474366: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:30:47.867 slat (usec): min=7, max=125, avg=49.55, stdev=22.37 00:30:47.867 clat (usec): min=15999, max=56655, avg=27570.62, stdev=1738.88 00:30:47.867 lat (usec): min=16007, max=56673, avg=27620.17, stdev=1738.15 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[26870], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:30:47.867 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:30:47.867 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:30:47.867 | 99.00th=[28967], 99.50th=[32900], 99.90th=[56361], 99.95th=[56361], 00:30:47.867 | 99.99th=[56886] 00:30:47.867 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2278.40, stdev=52.53, samples=20 00:30:47.867 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:30:47.867 lat (msec) : 20=0.32%, 50=99.40%, 100=0.28% 00:30:47.867 cpu : usr=99.10%, sys=0.51%, ctx=14, majf=0, minf=48 00:30:47.867 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:47.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename1: (groupid=0, jobs=1): err= 0: pid=474367: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10002msec) 00:30:47.867 slat (nsec): min=5158, max=40706, avg=16981.98, stdev=5687.54 00:30:47.867 clat (usec): min=1588, max=69529, avg=27789.60, stdev=2842.88 00:30:47.867 lat (usec): min=1595, max=69541, avg=27806.58, stdev=2842.97 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[23200], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:47.867 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:30:47.867 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:47.867 | 99.00th=[28967], 99.50th=[32900], 99.90th=[69731], 99.95th=[69731], 00:30:47.867 | 99.99th=[69731] 00:30:47.867 bw ( KiB/s): min= 2052, max= 2304, per=4.13%, avg=2270.53, stdev=71.25, samples=19 00:30:47.867 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:30:47.867 lat (msec) : 2=0.24%, 20=0.56%, 50=98.92%, 100=0.28% 00:30:47.867 cpu : usr=98.94%, sys=0.69%, ctx=17, majf=0, minf=47 00:30:47.867 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename1: (groupid=0, jobs=1): err= 0: pid=474368: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10006msec) 00:30:47.867 slat (usec): min=6, max=107, avg=42.97, stdev=22.34 00:30:47.867 clat (usec): min=8552, max=73324, avg=27620.25, stdev=2786.61 00:30:47.867 lat (usec): min=8560, max=73342, avg=27663.22, stdev=2785.38 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[26870], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:30:47.867 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:30:47.867 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:30:47.867 | 99.00th=[29230], 99.50th=[33424], 99.90th=[72877], 99.95th=[72877], 00:30:47.867 | 99.99th=[72877] 00:30:47.867 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2270.32, stdev=71.93, samples=19 00:30:47.867 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:30:47.867 lat (msec) : 10=0.28%, 20=0.28%, 50=99.16%, 100=0.28% 00:30:47.867 cpu : usr=98.88%, sys=0.73%, ctx=14, majf=0, minf=42 00:30:47.867 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename1: (groupid=0, jobs=1): err= 0: pid=474369: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=570, BW=2281KiB/s (2335kB/s)(22.3MiB/10001msec) 00:30:47.867 slat (usec): min=7, max=105, avg=45.04, stdev=22.79 00:30:47.867 clat (msec): min=8, max=100, avg=27.61, stdev= 3.50 00:30:47.867 lat (msec): min=8, max=100, avg=27.66, 
stdev= 3.50 00:30:47.867 clat percentiles (msec): 00:30:47.867 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:30:47.867 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:30:47.867 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:30:47.867 | 99.00th=[ 30], 99.50th=[ 34], 99.90th=[ 85], 99.95th=[ 85], 00:30:47.867 | 99.99th=[ 101] 00:30:47.867 bw ( KiB/s): min= 1971, max= 2304, per=4.12%, avg=2266.26, stdev=85.95, samples=19 00:30:47.867 iops : min= 492, max= 576, avg=566.53, stdev=21.63, samples=19 00:30:47.867 lat (msec) : 10=0.28%, 20=0.39%, 50=99.05%, 100=0.26%, 250=0.02% 00:30:47.867 cpu : usr=98.89%, sys=0.73%, ctx=11, majf=0, minf=45 00:30:47.867 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename1: (groupid=0, jobs=1): err= 0: pid=474370: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=584, BW=2338KiB/s (2394kB/s)(22.9MiB/10020msec) 00:30:47.867 slat (nsec): min=6769, max=63076, avg=11796.12, stdev=4020.77 00:30:47.867 clat (usec): min=2405, max=48144, avg=27277.10, stdev=3797.63 00:30:47.867 lat (usec): min=2414, max=48156, avg=27288.90, stdev=3797.70 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[ 5538], 5.00th=[27132], 10.00th=[27657], 20.00th=[27657], 00:30:47.867 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:47.867 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:47.867 | 99.00th=[28967], 99.50th=[32900], 99.90th=[47973], 99.95th=[47973], 00:30:47.867 | 99.99th=[47973] 00:30:47.867 bw ( KiB/s): min= 2176, max= 3200, per=4.25%, avg=2336.00, stdev=211.25, samples=20 00:30:47.867 iops : min= 544, max= 800, avg=584.00, stdev=52.81, samples=20 00:30:47.867 lat (msec) : 4=0.96%, 10=1.40%, 20=0.65%, 50=96.99% 00:30:47.867 cpu : usr=98.84%, sys=0.76%, ctx=13, majf=0, minf=51 00:30:47.867 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename1: (groupid=0, jobs=1): err= 0: pid=474371: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=570, BW=2284KiB/s (2339kB/s)(22.3MiB/10001msec) 00:30:47.867 slat (usec): min=4, max=110, avg=45.21, stdev=24.31 00:30:47.867 clat (usec): min=8497, max=99547, avg=27569.61, stdev=3578.94 00:30:47.867 lat (usec): min=8516, max=99563, avg=27614.82, stdev=3578.21 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[21365], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:30:47.867 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:30:47.867 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:30:47.867 | 99.00th=[32900], 99.50th=[34866], 99.90th=[84411], 99.95th=[84411], 00:30:47.867 | 99.99th=[99091] 00:30:47.867 bw ( KiB/s): min= 1987, max= 2304, per=4.13%, avg=2269.63, stdev=80.70, samples=19 00:30:47.867 iops : min= 496, max= 576, avg=567.37, stdev=20.32, samples=19 00:30:47.867 lat 
(msec) : 10=0.28%, 20=0.44%, 50=99.00%, 100=0.28% 00:30:47.867 cpu : usr=98.98%, sys=0.64%, ctx=12, majf=0, minf=55 00:30:47.867 IO depths : 1=5.9%, 2=11.9%, 4=23.9%, 8=51.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename1: (groupid=0, jobs=1): err= 0: pid=474372: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:30:47.867 slat (usec): min=7, max=106, avg=43.47, stdev=22.77 00:30:47.867 clat (usec): min=12536, max=81807, avg=27703.68, stdev=3037.20 00:30:47.867 lat (usec): min=12607, max=81834, avg=27747.15, stdev=3035.16 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[26870], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:30:47.867 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:30:47.867 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:30:47.867 | 99.00th=[29230], 99.50th=[32900], 99.90th=[81265], 99.95th=[81265], 00:30:47.867 | 99.99th=[82314] 00:30:47.867 bw ( KiB/s): min= 1923, max= 2304, per=4.13%, avg=2270.47, stdev=93.27, samples=19 00:30:47.867 iops : min= 480, max= 576, avg=567.58, stdev=23.47, samples=19 00:30:47.867 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:30:47.867 cpu : usr=98.96%, sys=0.65%, ctx=21, majf=0, minf=56 00:30:47.867 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename1: (groupid=0, jobs=1): err= 0: pid=474373: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:30:47.867 slat (nsec): min=7262, max=93228, avg=37378.56, stdev=14738.21 00:30:47.867 clat (usec): min=16065, max=56541, avg=27719.37, stdev=1703.23 00:30:47.867 lat (usec): min=16073, max=56566, avg=27756.75, stdev=1702.01 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:30:47.867 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:30:47.867 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:30:47.867 | 99.00th=[28967], 99.50th=[33162], 99.90th=[56361], 99.95th=[56361], 00:30:47.867 | 99.99th=[56361] 00:30:47.867 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2278.40, stdev=52.53, samples=20 00:30:47.867 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:30:47.867 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:30:47.867 cpu : usr=98.79%, sys=0.80%, ctx=86, majf=0, minf=48 00:30:47.867 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename2: (groupid=0, jobs=1): err= 0: pid=474374: Wed 
May 15 08:42:33 2024 00:30:47.867 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:30:47.867 slat (usec): min=7, max=118, avg=48.95, stdev=21.95 00:30:47.867 clat (usec): min=15962, max=56489, avg=27612.27, stdev=1723.38 00:30:47.867 lat (usec): min=15971, max=56520, avg=27661.22, stdev=1721.66 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[26870], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:30:47.867 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:30:47.867 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:30:47.867 | 99.00th=[28967], 99.50th=[33162], 99.90th=[56361], 99.95th=[56361], 00:30:47.867 | 99.99th=[56361] 00:30:47.867 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2278.40, stdev=52.53, samples=20 00:30:47.867 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:30:47.867 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:30:47.867 cpu : usr=98.66%, sys=0.95%, ctx=13, majf=0, minf=47 00:30:47.867 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.867 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.867 filename2: (groupid=0, jobs=1): err= 0: pid=474375: Wed May 15 08:42:33 2024 00:30:47.867 read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10014msec) 00:30:47.867 slat (nsec): min=6904, max=62858, avg=13041.07, stdev=4411.58 00:30:47.867 clat (usec): min=2492, max=50486, avg=26901.13, stdev=4667.61 00:30:47.867 lat (usec): min=2502, max=50501, avg=26914.17, stdev=4668.20 00:30:47.867 clat percentiles (usec): 00:30:47.867 | 1.00th=[ 5669], 5.00th=[17171], 10.00th=[24773], 20.00th=[27657], 00:30:47.867 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:47.867 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28967], 00:30:47.867 | 99.00th=[33162], 99.50th=[42206], 99.90th=[49546], 99.95th=[50070], 00:30:47.867 | 99.99th=[50594] 00:30:47.867 bw ( KiB/s): min= 2176, max= 3072, per=4.30%, avg=2367.20, stdev=195.67, samples=20 00:30:47.867 iops : min= 544, max= 768, avg=591.80, stdev=48.92, samples=20 00:30:47.867 lat (msec) : 4=0.54%, 10=2.29%, 20=3.19%, 50=93.92%, 100=0.07% 00:30:47.867 cpu : usr=98.78%, sys=0.83%, ctx=15, majf=0, minf=53 00:30:47.867 IO depths : 1=2.8%, 2=8.2%, 4=22.2%, 8=57.0%, 16=9.8%, 32=0.0%, >=64=0.0% 00:30:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 issued rwts: total=5934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.868 filename2: (groupid=0, jobs=1): err= 0: pid=474377: Wed May 15 08:42:33 2024 00:30:47.868 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:30:47.868 slat (usec): min=6, max=103, avg=43.73, stdev=22.28 00:30:47.868 clat (usec): min=8375, max=84595, avg=27622.31, stdev=2960.87 00:30:47.868 lat (usec): min=8389, max=84612, avg=27666.04, stdev=2959.09 00:30:47.868 clat percentiles (usec): 00:30:47.868 | 1.00th=[26608], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:30:47.868 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:30:47.868 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 
95.00th=[28181], 00:30:47.868 | 99.00th=[29230], 99.50th=[32900], 99.90th=[76022], 99.95th=[76022], 00:30:47.868 | 99.99th=[84411] 00:30:47.868 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2270.32, stdev=71.93, samples=19 00:30:47.868 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:30:47.868 lat (msec) : 10=0.28%, 20=0.28%, 50=99.16%, 100=0.28% 00:30:47.868 cpu : usr=98.95%, sys=0.66%, ctx=12, majf=0, minf=41 00:30:47.868 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:47.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.868 filename2: (groupid=0, jobs=1): err= 0: pid=474378: Wed May 15 08:42:33 2024 00:30:47.868 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:30:47.868 slat (nsec): min=7149, max=42262, avg=17127.69, stdev=5683.63 00:30:47.868 clat (usec): min=14556, max=83544, avg=27948.07, stdev=3062.59 00:30:47.868 lat (usec): min=14566, max=83569, avg=27965.20, stdev=3062.31 00:30:47.868 clat percentiles (usec): 00:30:47.868 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:47.868 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:30:47.868 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:47.868 | 99.00th=[28967], 99.50th=[33162], 99.90th=[83362], 99.95th=[83362], 00:30:47.868 | 99.99th=[83362] 00:30:47.868 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=2270.32, stdev=93.89, samples=19 00:30:47.868 iops : min= 480, max= 576, avg=567.58, stdev=23.47, samples=19 00:30:47.868 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:30:47.868 cpu : usr=98.93%, sys=0.71%, ctx=15, majf=0, minf=42 00:30:47.868 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:47.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.868 filename2: (groupid=0, jobs=1): err= 0: pid=474379: Wed May 15 08:42:33 2024 00:30:47.868 read: IOPS=583, BW=2336KiB/s (2392kB/s)(22.8MiB/10011msec) 00:30:47.868 slat (usec): min=3, max=108, avg=16.67, stdev=11.50 00:30:47.868 clat (usec): min=5524, max=52532, avg=27244.98, stdev=3402.09 00:30:47.868 lat (usec): min=5540, max=52586, avg=27261.66, stdev=3403.40 00:30:47.868 clat percentiles (usec): 00:30:47.868 | 1.00th=[ 6587], 5.00th=[24511], 10.00th=[27132], 20.00th=[27657], 00:30:47.868 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:30:47.868 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:47.868 | 99.00th=[31065], 99.50th=[32900], 99.90th=[49546], 99.95th=[50070], 00:30:47.868 | 99.99th=[52691] 00:30:47.868 bw ( KiB/s): min= 2176, max= 2560, per=4.24%, avg=2332.00, stdev=96.40, samples=20 00:30:47.868 iops : min= 544, max= 640, avg=583.00, stdev=24.10, samples=20 00:30:47.868 lat (msec) : 10=1.44%, 20=2.33%, 50=96.20%, 100=0.03% 00:30:47.868 cpu : usr=98.74%, sys=0.86%, ctx=18, majf=0, minf=39 00:30:47.868 IO depths : 1=5.3%, 2=11.1%, 4=23.1%, 8=53.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:30:47.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 issued rwts: total=5846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.868 filename2: (groupid=0, jobs=1): err= 0: pid=474380: Wed May 15 08:42:33 2024 00:30:47.868 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:30:47.868 slat (usec): min=8, max=120, avg=49.54, stdev=21.91 00:30:47.868 clat (usec): min=16876, max=56655, avg=27579.22, stdev=1723.20 00:30:47.868 lat (usec): min=16894, max=56679, avg=27628.77, stdev=1722.14 00:30:47.868 clat percentiles (usec): 00:30:47.868 | 1.00th=[26870], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:30:47.868 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:30:47.868 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:30:47.868 | 99.00th=[28967], 99.50th=[32900], 99.90th=[56361], 99.95th=[56361], 00:30:47.868 | 99.99th=[56886] 00:30:47.868 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2278.40, stdev=52.53, samples=20 00:30:47.868 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:30:47.868 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:30:47.868 cpu : usr=98.71%, sys=0.90%, ctx=18, majf=0, minf=39 00:30:47.868 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:47.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.868 filename2: (groupid=0, jobs=1): err= 0: pid=474381: Wed May 15 08:42:33 2024 00:30:47.868 read: IOPS=577, BW=2311KiB/s (2366kB/s)(22.6MiB/10009msec) 00:30:47.868 slat (usec): min=7, max=126, avg=39.92, stdev=26.76 00:30:47.868 clat (usec): min=5465, max=49702, avg=27386.86, stdev=2667.44 00:30:47.868 lat (usec): min=5476, max=49758, avg=27426.78, stdev=2668.70 00:30:47.868 clat percentiles (usec): 00:30:47.868 | 1.00th=[12780], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:30:47.868 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:30:47.868 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:30:47.868 | 99.00th=[28967], 99.50th=[33162], 99.90th=[48497], 99.95th=[48497], 00:30:47.868 | 99.99th=[49546] 00:30:47.868 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2306.40, stdev=72.73, samples=20 00:30:47.868 iops : min= 544, max= 640, avg=576.60, stdev=18.18, samples=20 00:30:47.868 lat (msec) : 10=0.83%, 20=1.26%, 50=97.91% 00:30:47.868 cpu : usr=98.79%, sys=0.84%, ctx=18, majf=0, minf=94 00:30:47.868 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:47.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 issued rwts: total=5782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.868 filename2: (groupid=0, jobs=1): err= 0: pid=474382: Wed May 15 08:42:33 2024 00:30:47.868 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:30:47.868 slat (usec): min=7, max=110, avg=48.80, stdev=22.08 00:30:47.868 clat (usec): min=14920, max=56958, avg=27602.58, stdev=1776.18 00:30:47.868 lat (usec): min=14975, 
max=56996, avg=27651.39, stdev=1774.74 00:30:47.868 clat percentiles (usec): 00:30:47.868 | 1.00th=[26870], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:30:47.868 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:30:47.868 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:30:47.868 | 99.00th=[28967], 99.50th=[33162], 99.90th=[56886], 99.95th=[56886], 00:30:47.868 | 99.99th=[56886] 00:30:47.868 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2278.40, stdev=52.53, samples=20 00:30:47.868 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:30:47.868 lat (msec) : 20=0.32%, 50=99.40%, 100=0.28% 00:30:47.868 cpu : usr=98.87%, sys=0.73%, ctx=12, majf=0, minf=47 00:30:47.868 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:47.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.868 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:47.868 00:30:47.868 Run status group 0 (all jobs): 00:30:47.868 READ: bw=53.7MiB/s (56.3MB/s), 2277KiB/s-2370KiB/s (2332kB/s-2427kB/s), io=538MiB (564MB), run=10001-10020msec 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 bdev_null0 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 [2024-05-15 08:42:33.814846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 bdev_null1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
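[Annotation] By this point the trace has recreated subsystems 0 and 1 for the next fio pass (a DIF-enabled null bdev, an NVMe-oF subsystem wrapping it, and a TCP listener each) and is entering gen_nvmf_target_json. A standalone sketch of that create sequence, assuming a running nvmf_tgt and SPDK's rpc.py on PATH (rpc_cmd in this log is the autotest wrapper around it):

  # Hedged sketch of the create_subsystem helper traced above;
  # the RPC variable and plugin paths are assumptions, the argument
  # values are copied from the trace.
  RPC="rpc.py"
  for sub in 0 1; do
      # 64 MiB null bdev, 512 B blocks + 16 B metadata, protection info type 1
      $RPC bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
      $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
          --serial-number "53313233-${sub}" --allow-any-host
      $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
      $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
          -t tcp -a 10.0.0.2 -s 4420
  done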
00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.868 { 00:30:47.868 "params": { 00:30:47.868 "name": "Nvme$subsystem", 00:30:47.868 "trtype": "$TEST_TRANSPORT", 00:30:47.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.868 "adrfam": "ipv4", 00:30:47.868 "trsvcid": "$NVMF_PORT", 00:30:47.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.868 "hdgst": ${hdgst:-false}, 00:30:47.868 "ddgst": ${ddgst:-false} 00:30:47.868 }, 00:30:47.868 "method": "bdev_nvme_attach_controller" 00:30:47.868 } 00:30:47.868 EOF 00:30:47.868 )") 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:47.868 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.868 { 00:30:47.868 "params": { 00:30:47.868 "name": "Nvme$subsystem", 00:30:47.868 "trtype": "$TEST_TRANSPORT", 00:30:47.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.868 "adrfam": "ipv4", 00:30:47.868 "trsvcid": "$NVMF_PORT", 00:30:47.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.868 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:30:47.868 "hdgst": ${hdgst:-false}, 00:30:47.868 "ddgst": ${ddgst:-false} 00:30:47.868 }, 00:30:47.868 "method": "bdev_nvme_attach_controller" 00:30:47.868 } 00:30:47.868 EOF 00:30:47.869 )") 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:47.869 "params": { 00:30:47.869 "name": "Nvme0", 00:30:47.869 "trtype": "tcp", 00:30:47.869 "traddr": "10.0.0.2", 00:30:47.869 "adrfam": "ipv4", 00:30:47.869 "trsvcid": "4420", 00:30:47.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:47.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:47.869 "hdgst": false, 00:30:47.869 "ddgst": false 00:30:47.869 }, 00:30:47.869 "method": "bdev_nvme_attach_controller" 00:30:47.869 },{ 00:30:47.869 "params": { 00:30:47.869 "name": "Nvme1", 00:30:47.869 "trtype": "tcp", 00:30:47.869 "traddr": "10.0.0.2", 00:30:47.869 "adrfam": "ipv4", 00:30:47.869 "trsvcid": "4420", 00:30:47.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:47.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:47.869 "hdgst": false, 00:30:47.869 "ddgst": false 00:30:47.869 }, 00:30:47.869 "method": "bdev_nvme_attach_controller" 00:30:47.869 }' 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:47.869 08:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:47.869 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:47.869 ... 00:30:47.869 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:47.869 ... 
00:30:47.869 fio-3.35 00:30:47.869 Starting 4 threads 00:30:47.869 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.143 00:30:53.143 filename0: (groupid=0, jobs=1): err= 0: pid=476269: Wed May 15 08:42:39 2024 00:30:53.143 read: IOPS=2899, BW=22.7MiB/s (23.8MB/s)(113MiB/5003msec) 00:30:53.143 slat (nsec): min=6309, max=39158, avg=11428.44, stdev=4263.92 00:30:53.143 clat (usec): min=757, max=5345, avg=2719.46, stdev=403.03 00:30:53.143 lat (usec): min=769, max=5351, avg=2730.89, stdev=403.34 00:30:53.143 clat percentiles (usec): 00:30:53.143 | 1.00th=[ 1696], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2442], 00:30:53.143 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2769], 00:30:53.143 | 70.00th=[ 2900], 80.00th=[ 3032], 90.00th=[ 3130], 95.00th=[ 3294], 00:30:53.143 | 99.00th=[ 4015], 99.50th=[ 4490], 99.90th=[ 5080], 99.95th=[ 5211], 00:30:53.143 | 99.99th=[ 5342] 00:30:53.143 bw ( KiB/s): min=21648, max=24560, per=27.17%, avg=23086.22, stdev=1218.72, samples=9 00:30:53.143 iops : min= 2706, max= 3070, avg=2885.78, stdev=152.34, samples=9 00:30:53.143 lat (usec) : 1000=0.11% 00:30:53.143 lat (msec) : 2=1.99%, 4=96.89%, 10=1.01% 00:30:53.143 cpu : usr=93.04%, sys=4.66%, ctx=216, majf=0, minf=0 00:30:53.143 IO depths : 1=0.6%, 2=15.6%, 4=57.2%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.143 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.143 issued rwts: total=14506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:53.143 filename0: (groupid=0, jobs=1): err= 0: pid=476270: Wed May 15 08:42:39 2024 00:30:53.143 read: IOPS=2490, BW=19.5MiB/s (20.4MB/s)(98.1MiB/5042msec) 00:30:53.143 slat (nsec): min=6312, max=37445, avg=10524.30, stdev=4175.39 00:30:53.143 clat (usec): min=623, max=42676, avg=3169.21, stdev=969.82 00:30:53.143 lat (usec): min=631, max=42683, avg=3179.73, stdev=969.38 00:30:53.143 clat percentiles (usec): 00:30:53.143 | 1.00th=[ 1975], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2802], 00:30:53.143 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:30:53.143 | 70.00th=[ 3195], 80.00th=[ 3458], 90.00th=[ 4113], 95.00th=[ 4424], 00:30:53.143 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5473], 99.95th=[ 5538], 00:30:53.143 | 99.99th=[42730] 00:30:53.143 bw ( KiB/s): min=17824, max=21312, per=23.70%, avg=20141.56, stdev=1143.31, samples=9 00:30:53.143 iops : min= 2228, max= 2664, avg=2517.67, stdev=142.89, samples=9 00:30:53.143 lat (usec) : 750=0.05%, 1000=0.06% 00:30:53.143 lat (msec) : 2=0.96%, 4=87.79%, 10=11.09%, 50=0.04% 00:30:53.143 cpu : usr=93.73%, sys=4.05%, ctx=259, majf=0, minf=9 00:30:53.143 IO depths : 1=0.4%, 2=6.1%, 4=66.5%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.143 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.143 issued rwts: total=12558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:53.143 filename1: (groupid=0, jobs=1): err= 0: pid=476271: Wed May 15 08:42:39 2024 00:30:53.143 read: IOPS=2860, BW=22.3MiB/s (23.4MB/s)(112MiB/5002msec) 00:30:53.143 slat (nsec): min=6356, max=39862, avg=11209.29, stdev=3825.33 00:30:53.143 clat (usec): min=606, max=5390, avg=2759.91, stdev=388.35 00:30:53.143 lat (usec): min=618, max=5399, avg=2771.12, 
stdev=388.68 00:30:53.143 clat percentiles (usec): 00:30:53.143 | 1.00th=[ 1876], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2474], 00:30:53.143 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2835], 00:30:53.143 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3163], 95.00th=[ 3359], 00:30:53.143 | 99.00th=[ 3949], 99.50th=[ 4293], 99.90th=[ 4752], 99.95th=[ 5211], 00:30:53.143 | 99.99th=[ 5342] 00:30:53.143 bw ( KiB/s): min=21504, max=24272, per=26.92%, avg=22880.00, stdev=1081.24, samples=9 00:30:53.143 iops : min= 2688, max= 3034, avg=2860.00, stdev=135.16, samples=9 00:30:53.143 lat (usec) : 750=0.01%, 1000=0.05% 00:30:53.143 lat (msec) : 2=1.49%, 4=97.53%, 10=0.92% 00:30:53.143 cpu : usr=97.20%, sys=2.42%, ctx=15, majf=0, minf=9 00:30:53.143 IO depths : 1=0.3%, 2=13.9%, 4=57.3%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.143 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.143 issued rwts: total=14306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:53.143 filename1: (groupid=0, jobs=1): err= 0: pid=476273: Wed May 15 08:42:39 2024 00:30:53.143 read: IOPS=2437, BW=19.0MiB/s (20.0MB/s)(95.2MiB/5001msec) 00:30:53.143 slat (nsec): min=6350, max=42155, avg=10181.57, stdev=3897.53 00:30:53.143 clat (usec): min=553, max=5838, avg=3253.40, stdev=615.36 00:30:53.143 lat (usec): min=564, max=5861, avg=3263.58, stdev=614.70 00:30:53.143 clat percentiles (usec): 00:30:53.143 | 1.00th=[ 2147], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2868], 00:30:53.143 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3163], 00:30:53.143 | 70.00th=[ 3294], 80.00th=[ 3589], 90.00th=[ 4293], 95.00th=[ 4490], 00:30:53.143 | 99.00th=[ 5211], 99.50th=[ 5407], 99.90th=[ 5735], 99.95th=[ 5800], 00:30:53.143 | 99.99th=[ 5800] 00:30:53.143 bw ( KiB/s): min=17632, max=20736, per=22.87%, avg=19431.11, stdev=925.97, samples=9 00:30:53.143 iops : min= 2204, max= 2592, avg=2428.89, stdev=115.75, samples=9 00:30:53.143 lat (usec) : 750=0.02% 00:30:53.143 lat (msec) : 2=0.53%, 4=85.81%, 10=13.65% 00:30:53.143 cpu : usr=97.26%, sys=2.42%, ctx=10, majf=0, minf=9 00:30:53.143 IO depths : 1=0.4%, 2=3.2%, 4=68.5%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.143 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.143 issued rwts: total=12188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:53.143 00:30:53.143 Run status group 0 (all jobs): 00:30:53.143 READ: bw=83.0MiB/s (87.0MB/s), 19.0MiB/s-22.7MiB/s (20.0MB/s-23.8MB/s), io=418MiB (439MB), run=5001-5042msec 00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:53.143 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.144 00:30:53.144 real 0m24.370s 00:30:53.144 user 4m52.021s 00:30:53.144 sys 0m4.298s 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:53.144 08:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.144 ************************************ 00:30:53.144 END TEST fio_dif_rand_params 00:30:53.144 ************************************ 00:30:53.144 08:42:40 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:53.144 08:42:40 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:53.144 08:42:40 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:53.144 08:42:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:53.144 ************************************ 00:30:53.144 START TEST fio_dif_digest 00:30:53.144 ************************************ 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 
00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:53.144 bdev_null0 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.144 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:53.144 [2024-05-15 08:42:40.162966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1337 -- # shift 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.426 { 00:30:53.426 "params": { 00:30:53.426 "name": "Nvme$subsystem", 00:30:53.426 "trtype": "$TEST_TRANSPORT", 00:30:53.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.426 "adrfam": "ipv4", 00:30:53.426 "trsvcid": "$NVMF_PORT", 00:30:53.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.426 "hdgst": ${hdgst:-false}, 00:30:53.426 "ddgst": ${ddgst:-false} 00:30:53.426 }, 00:30:53.426 "method": "bdev_nvme_attach_controller" 00:30:53.426 } 00:30:53.426 EOF 00:30:53.426 )") 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:53.426 "params": { 00:30:53.426 "name": "Nvme0", 00:30:53.426 "trtype": "tcp", 00:30:53.426 "traddr": "10.0.0.2", 00:30:53.426 "adrfam": "ipv4", 00:30:53.426 "trsvcid": "4420", 00:30:53.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.426 "hdgst": true, 00:30:53.426 "ddgst": true 00:30:53.426 }, 00:30:53.426 "method": "bdev_nvme_attach_controller" 00:30:53.426 }' 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:53.426 08:42:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.692 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:53.692 ... 
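The printf above shows the fully resolved form of the config template: a single Nvme0 controller attached over TCP to cnode0 with both hdgst and ddgst forced true, so every PDU on the connection carries header and data digests for fio to stress. The same attachment could be made by hand against a running target; a sketch, assuming this SPDK revision's rpc.py spells the digest switches --hdgst/--ddgst:

# hand-driven equivalent of the generated attach (flag spellings assumed):
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --hdgst --ddgst

In the test itself, fio consumes the JSON directly via --spdk_json_conf /dev/fd/62, which is why no intermediate config file ever appears in the workspace.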
00:30:53.692 fio-3.35 00:30:53.692 Starting 3 threads 00:30:53.692 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.909 00:31:05.909 filename0: (groupid=0, jobs=1): err= 0: pid=477420: Wed May 15 08:42:51 2024 00:31:05.909 read: IOPS=290, BW=36.4MiB/s (38.1MB/s)(365MiB/10046msec) 00:31:05.909 slat (nsec): min=6710, max=35000, avg=12013.26, stdev=1815.02 00:31:05.909 clat (usec): min=5248, max=52646, avg=10285.77, stdev=1257.29 00:31:05.909 lat (usec): min=5258, max=52658, avg=10297.78, stdev=1257.25 00:31:05.909 clat percentiles (usec): 00:31:05.909 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:31:05.909 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:31:05.909 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:31:05.909 | 99.00th=[11863], 99.50th=[12125], 99.90th=[12387], 99.95th=[47449], 00:31:05.909 | 99.99th=[52691] 00:31:05.909 bw ( KiB/s): min=36096, max=38144, per=35.33%, avg=37376.00, stdev=587.30, samples=20 00:31:05.909 iops : min= 282, max= 298, avg=292.00, stdev= 4.59, samples=20 00:31:05.909 lat (msec) : 10=34.60%, 20=65.33%, 50=0.03%, 100=0.03% 00:31:05.909 cpu : usr=94.51%, sys=5.20%, ctx=33, majf=0, minf=102 00:31:05.909 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.909 issued rwts: total=2922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.909 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:05.909 filename0: (groupid=0, jobs=1): err= 0: pid=477421: Wed May 15 08:42:51 2024 00:31:05.909 read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(341MiB/10003msec) 00:31:05.909 slat (nsec): min=6651, max=26515, avg=12017.71, stdev=1722.22 00:31:05.909 clat (usec): min=4550, max=14406, avg=10989.92, stdev=761.80 00:31:05.909 lat (usec): min=4559, max=14432, avg=11001.94, stdev=761.79 00:31:05.909 clat percentiles (usec): 00:31:05.909 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:31:05.909 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:31:05.909 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:31:05.909 | 99.00th=[12780], 99.50th=[13173], 99.90th=[14353], 99.95th=[14353], 00:31:05.909 | 99.99th=[14353] 00:31:05.909 bw ( KiB/s): min=33792, max=35840, per=32.97%, avg=34883.37, stdev=565.02, samples=19 00:31:05.909 iops : min= 264, max= 280, avg=272.53, stdev= 4.41, samples=19 00:31:05.909 lat (msec) : 10=8.32%, 20=91.68% 00:31:05.909 cpu : usr=95.11%, sys=4.59%, ctx=20, majf=0, minf=121 00:31:05.909 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.909 issued rwts: total=2727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.909 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:05.909 filename0: (groupid=0, jobs=1): err= 0: pid=477422: Wed May 15 08:42:51 2024 00:31:05.909 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(332MiB/10044msec) 00:31:05.909 slat (nsec): min=6583, max=25743, avg=12105.28, stdev=1665.93 00:31:05.909 clat (usec): min=8873, max=50907, avg=11323.83, stdev=1222.04 00:31:05.909 lat (usec): min=8886, max=50918, avg=11335.93, stdev=1222.03 00:31:05.909 clat percentiles (usec): 00:31:05.909 | 1.00th=[ 9765], 
5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:31:05.909 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:31:05.909 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:31:05.909 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13698], 99.95th=[44303], 00:31:05.909 | 99.99th=[51119] 00:31:05.909 bw ( KiB/s): min=33536, max=34560, per=32.09%, avg=33945.60, stdev=315.18, samples=20 00:31:05.909 iops : min= 262, max= 270, avg=265.20, stdev= 2.46, samples=20 00:31:05.909 lat (msec) : 10=2.60%, 20=97.32%, 50=0.04%, 100=0.04% 00:31:05.909 cpu : usr=95.11%, sys=4.59%, ctx=19, majf=0, minf=151 00:31:05.909 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.909 issued rwts: total=2654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.909 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:05.909 00:31:05.909 Run status group 0 (all jobs): 00:31:05.909 READ: bw=103MiB/s (108MB/s), 33.0MiB/s-36.4MiB/s (34.6MB/s-38.1MB/s), io=1038MiB (1088MB), run=10003-10046msec 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.909 00:31:05.909 real 0m11.201s 00:31:05.909 user 0m35.153s 00:31:05.909 sys 0m1.735s 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:05.909 08:42:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:05.909 ************************************ 00:31:05.909 END TEST fio_dif_digest 00:31:05.909 ************************************ 00:31:05.909 08:42:51 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:05.909 08:42:51 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:05.909 rmmod nvme_tcp 00:31:05.909 rmmod nvme_fabrics 00:31:05.909 rmmod 
nvme_keyring 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 468283 ']' 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 468283 00:31:05.909 08:42:51 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 468283 ']' 00:31:05.909 08:42:51 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 468283 00:31:05.909 08:42:51 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:31:05.909 08:42:51 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:05.909 08:42:51 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 468283 00:31:05.909 08:42:51 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:05.909 08:42:51 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:05.909 08:42:51 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 468283' 00:31:05.909 killing process with pid 468283 00:31:05.909 08:42:51 nvmf_dif -- common/autotest_common.sh@965 -- # kill 468283 00:31:05.909 [2024-05-15 08:42:51.458696] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:05.909 08:42:51 nvmf_dif -- common/autotest_common.sh@970 -- # wait 468283 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:05.909 08:42:51 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:06.847 Waiting for block devices as requested 00:31:06.847 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:06.847 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:07.105 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:07.105 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:07.105 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:07.105 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:07.364 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:07.364 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:07.364 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:07.364 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:07.624 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:07.624 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:07.624 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:07.884 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:07.884 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:07.884 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:07.884 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:08.143 08:42:55 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:08.143 08:42:55 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:08.143 08:42:55 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:08.143 08:42:55 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:08.143 08:42:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.143 08:42:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:08.143 08:42:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.048 08:42:57 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:10.307 00:31:10.307 real 1m13.147s 
00:31:10.307 user 7m10.081s 00:31:10.307 sys 0m18.121s 00:31:10.307 08:42:57 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:10.307 08:42:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:10.307 ************************************ 00:31:10.307 END TEST nvmf_dif 00:31:10.307 ************************************ 00:31:10.307 08:42:57 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:10.307 08:42:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:10.307 08:42:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.307 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:31:10.307 ************************************ 00:31:10.307 START TEST nvmf_abort_qd_sizes 00:31:10.307 ************************************ 00:31:10.307 08:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:10.307 * Looking for test storage... 00:31:10.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:10.307 08:42:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.307 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:10.307 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.307 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.307 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.307 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.307 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.307 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.308 08:42:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:10.308 08:42:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:15.584 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:15.584 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:15.585 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:15.585 Found net devices under 0000:86:00.0: cvl_0_0 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:15.585 Found net devices under 0000:86:00.1: cvl_0_1 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
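The mapping from PCI function to interface name above comes straight from sysfs: for each supported function the script globs the net/ directory that the kernel populates once a driver (ice, in this case) has bound the device, which is exactly the pci_net_devs expansion in the trace. A minimal standalone sketch of that lookup, using the two addresses found in this run:

# PCI function -> kernel netdev, as gather_supported_nvmf_pci_devs does it:
for pci in 0000:86:00.0 0000:86:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue      # no driver bound -> nothing to report
        echo "$pci -> ${path##*/}"      # prints cvl_0_0 and cvl_0_1 here
    done
done

The ${path##*/} strip mirrors the pci_net_devs=("${pci_net_devs[@]##*/}") step in the log, reducing each sysfs path to the bare interface name.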
00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:15.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:31:15.585 00:31:15.585 --- 10.0.0.2 ping statistics --- 00:31:15.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.585 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:15.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:31:15.585 00:31:15.585 --- 10.0.0.1 ping statistics --- 00:31:15.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.585 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:15.585 08:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:18.875 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:18.875 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:19.444 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=485185 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 485185 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 485185 ']' 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
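The connectivity just verified rests on the namespace split nvmf_tcp_init performed a few steps earlier: cvl_0_0 (10.0.0.2, the target side) was moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (10.0.0.1) stayed in the default one, so initiator traffic traverses the link rather than the local stack. The sequence, reproduced as standalone commands with the interface names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator keeps 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                              # initiator -> target sanity check

This is also why every target-side command from here on is wrapped in ip netns exec cvl_0_0_ns_spdk, including the nvmf_tgt launch that waitforlisten is polling for above.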
00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.444 08:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:19.444 [2024-05-15 08:43:06.380865] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:31:19.444 [2024-05-15 08:43:06.380912] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.444 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.444 [2024-05-15 08:43:06.441017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:19.704 [2024-05-15 08:43:06.523429] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:19.704 [2024-05-15 08:43:06.523474] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.704 [2024-05-15 08:43:06.523482] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.704 [2024-05-15 08:43:06.523488] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.704 [2024-05-15 08:43:06.523493] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.704 [2024-05-15 08:43:06.523710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.704 [2024-05-15 08:43:06.523726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.704 [2024-05-15 08:43:06.523824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:19.704 [2024-05-15 08:43:06.523825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e 
/sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:20.274 08:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:20.274 ************************************ 00:31:20.274 START TEST spdk_target_abort 00:31:20.274 ************************************ 00:31:20.274 08:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:31:20.274 08:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:20.274 08:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:31:20.274 08:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.274 08:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:23.562 spdk_targetn1 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:23.562 [2024-05-15 08:43:10.092802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
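Everything the abort test needs is created through rpc_cmd (the autotest wrapper around scripts/rpc.py) against the just-started target: the physical drive found at 0000:5e:00.0 becomes the spdk_targetn1 bdev, which is then exported over a TCP transport with an 8192-byte I/O unit (-u 8192). As direct rpc.py calls, a sketch with the script path assumed and the final listener line being the call issued immediately below:

./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The -a on nvmf_create_subsystem allows any host NQN to connect, which keeps the abort tool runs below free of host-side credentials.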
00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:23.562 [2024-05-15 08:43:10.125581] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:23.562 [2024-05-15 08:43:10.125819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:23.562 08:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:23.562 08:43:10 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:23.562 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.850 Initializing NVMe Controllers 00:31:26.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:26.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:26.850 Initialization complete. Launching workers. 00:31:26.850 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15626, failed: 0 00:31:26.850 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1366, failed to submit 14260 00:31:26.850 success 726, unsuccess 640, failed 0 00:31:26.850 08:43:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:26.850 08:43:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:26.850 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.138 Initializing NVMe Controllers 00:31:30.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:30.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:30.138 Initialization complete. Launching workers. 00:31:30.138 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8496, failed: 0 00:31:30.138 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1258, failed to submit 7238 00:31:30.138 success 336, unsuccess 922, failed 0 00:31:30.138 08:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:30.138 08:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:30.138 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.425 Initializing NVMe Controllers 00:31:33.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:33.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:33.425 Initialization complete. Launching workers. 
00:31:33.425 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37913, failed: 0 00:31:33.425 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2851, failed to submit 35062 00:31:33.425 success 609, unsuccess 2242, failed 0 00:31:33.425 08:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:33.425 08:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.425 08:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:33.425 08:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.425 08:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:33.425 08:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.425 08:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 485185 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 485185 ']' 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 485185 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 485185 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 485185' 00:31:34.360 killing process with pid 485185 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 485185 00:31:34.360 [2024-05-15 08:43:21.193785] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:34.360 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 485185 00:31:34.619 00:31:34.619 real 0m14.139s 00:31:34.619 user 0m56.291s 00:31:34.619 sys 0m2.259s 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:34.619 ************************************ 00:31:34.619 END TEST spdk_target_abort 00:31:34.619 ************************************ 00:31:34.619 08:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:34.619 08:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:34.619 08:43:21 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:31:34.619 08:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:34.619 ************************************ 00:31:34.619 START TEST kernel_target_abort 00:31:34.619 ************************************ 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:34.619 08:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:37.154 Waiting for block devices as requested 00:31:37.154 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:37.413 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:37.413 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:37.413 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:37.413 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:37.672 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:37.672 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:37.672 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:37.672 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:37.672 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:37.931 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:37.931 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:37.931 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:38.190 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:38.190 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:38.190 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:38.190 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:38.450 No valid GPT data, bailing 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:38.450 08:43:25 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:31:38.450 00:31:38.450 Discovery Log Number of Records 2, Generation counter 2 00:31:38.450 =====Discovery Log Entry 0====== 00:31:38.450 trtype: tcp 00:31:38.450 adrfam: ipv4 00:31:38.450 subtype: current discovery subsystem 00:31:38.450 treq: not specified, sq flow control disable supported 00:31:38.450 portid: 1 00:31:38.450 trsvcid: 4420 00:31:38.450 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:38.450 traddr: 10.0.0.1 00:31:38.450 eflags: none 00:31:38.450 sectype: none 00:31:38.450 =====Discovery Log Entry 1====== 00:31:38.450 trtype: tcp 00:31:38.450 adrfam: ipv4 00:31:38.450 subtype: nvme subsystem 00:31:38.450 treq: not specified, sq flow control disable supported 00:31:38.450 portid: 1 00:31:38.450 trsvcid: 4420 00:31:38.450 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:38.450 traddr: 10.0.0.1 00:31:38.450 eflags: none 00:31:38.450 sectype: none 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:38.450 08:43:25 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:38.450 08:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:38.709 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.998 Initializing NVMe Controllers 00:31:41.998 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:41.998 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:41.998 Initialization complete. Launching workers. 00:31:41.998 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93830, failed: 0 00:31:41.998 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93830, failed to submit 0 00:31:41.998 success 0, unsuccess 93830, failed 0 00:31:41.998 08:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:41.999 08:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:41.999 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.294 Initializing NVMe Controllers 00:31:45.294 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:45.294 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:45.294 Initialization complete. Launching workers. 
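The configure_kernel_target steps traced above (nvmf/common.sh@658-677) boil down to building the stock nvmet configfs tree and linking the subsystem to a TCP port. A condensed sketch using this run's values; the trace records only the echoed values, so the exact attribute files they land in (attr_model, attr_allow_any_host, device_path, enable, addr_*) are inferred from the standard nvmet configfs layout:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # inferred sink for the first echo
    echo 1 > "$subsys/attr_allow_any_host"                         # skip host allow-listing
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # the unpartitioned disk found above
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # expose the subsystem on the port

After this, the nvme discover output above lists both the discovery subsystem and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420, and the abort example is run against it at queue depths 4, 24 and 64, as the runs below show.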
00:31:45.294 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 147738, failed: 0 00:31:45.294 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36626, failed to submit 111112 00:31:45.294 success 0, unsuccess 36626, failed 0 00:31:45.294 08:43:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:45.294 08:43:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:45.294 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.828 Initializing NVMe Controllers 00:31:47.828 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:47.828 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:47.828 Initialization complete. Launching workers. 00:31:47.828 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141039, failed: 0 00:31:47.828 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35310, failed to submit 105729 00:31:47.828 success 0, unsuccess 35310, failed 0 00:31:47.828 08:43:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:47.828 08:43:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:47.828 08:43:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:47.828 08:43:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:47.828 08:43:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:47.828 08:43:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:47.828 08:43:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:47.828 08:43:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:47.828 08:43:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:47.828 08:43:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:50.363 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:50.363 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:31:50.622 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:50.622 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:51.191 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:51.450 00:31:51.450 real 0m16.828s 00:31:51.450 user 0m8.833s 00:31:51.450 sys 0m4.642s 00:31:51.450 08:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:51.450 08:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:51.450 ************************************ 00:31:51.450 END TEST kernel_target_abort 00:31:51.450 ************************************ 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:51.450 rmmod nvme_tcp 00:31:51.450 rmmod nvme_fabrics 00:31:51.450 rmmod nvme_keyring 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 485185 ']' 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 485185 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 485185 ']' 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 485185 00:31:51.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (485185) - No such process 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 485185 is not found' 00:31:51.450 Process with pid 485185 is not found 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:51.450 08:43:38 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:53.987 Waiting for block devices as requested 00:31:53.987 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:53.987 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:53.987 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:53.987 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:53.987 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:53.987 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:54.246 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:54.246 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:54.246 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:54.246 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:54.504 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:54.504 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:54.504 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:54.763 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:54.763 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:54.763 0000:80:04.1 
(8086 2021): vfio-pci -> ioatdma 00:31:54.763 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:55.023 08:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:55.023 08:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:55.023 08:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:55.023 08:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:55.023 08:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.023 08:43:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:55.023 08:43:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.928 08:43:43 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:56.928 00:31:56.928 real 0m46.745s 00:31:56.928 user 1m8.992s 00:31:56.928 sys 0m14.753s 00:31:56.928 08:43:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:56.928 08:43:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:56.928 ************************************ 00:31:56.928 END TEST nvmf_abort_qd_sizes 00:31:56.928 ************************************ 00:31:56.928 08:43:43 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:56.928 08:43:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:56.928 08:43:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:56.928 08:43:43 -- common/autotest_common.sh@10 -- # set +x 00:31:57.187 ************************************ 00:31:57.187 START TEST keyring_file 00:31:57.187 ************************************ 00:31:57.187 08:43:43 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:57.187 * Looking for test storage... 
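The clean_kernel_target teardown traced earlier (nvmf/common.sh@684-698) unwinds that configfs tree in reverse before setup.sh hands the devices back to vfio-pci. A condensed sketch; as above, the file the leading 'echo 0' writes to (the namespace enable flag) is inferred, while the rm/rmdir/modprobe sequence is verbatim from the trace:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"    # assumed sink of the 'echo 0' in the trace
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet               # only safe once /sys/module/nvmet/holders is empty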
00:31:57.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.187 08:43:44 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.187 08:43:44 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.187 08:43:44 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.187 08:43:44 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.187 08:43:44 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.187 08:43:44 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.187 08:43:44 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:57.187 08:43:44 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gbs1Xq1Tmr 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:57.187 08:43:44 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gbs1Xq1Tmr 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gbs1Xq1Tmr 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.gbs1Xq1Tmr 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.M0QdxvUoLq 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:57.187 08:43:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.M0QdxvUoLq 00:31:57.187 08:43:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.M0QdxvUoLq 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.M0QdxvUoLq 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@30 -- # tgtpid=493920 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:57.187 08:43:44 keyring_file -- keyring/file.sh@32 -- # waitforlisten 493920 00:31:57.187 08:43:44 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 493920 ']' 00:31:57.187 08:43:44 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.187 08:43:44 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:57.187 08:43:44 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.187 08:43:44 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:57.187 08:43:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:57.457 [2024-05-15 08:43:44.243155] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
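prep_key above turns each raw hex key into the NVMe/TCP PSK interchange form before writing it to a 0600 mktemp file. The body of the inline 'python -' step is not captured in the trace; a sketch of what it plausibly computes, assuming the documented interchange layout that the NVMeTLSkey-1 prefix implies (prefix, two-digit hash identifier, base64 of the key bytes followed by their CRC-32):

    # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0, reconstructed
    python -c 'import base64, zlib
    key = bytes.fromhex("00112233445566778899aabbccddeeff")
    crc = zlib.crc32(key).to_bytes(4, "little")   # integrity check appended to the key
    print("NVMeTLSkey-1:{:02x}:{}:".format(0, base64.b64encode(key + crc).decode()))'

Here digest 0 means the PSK is used as configured, with no HMAC transform; keyring_file_add_key then registers the resulting /tmp/tmp.* paths under the names key0 and key1, as the RPCs below show.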
00:31:57.457 [2024-05-15 08:43:44.243207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493920 ] 00:31:57.457 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.457 [2024-05-15 08:43:44.296320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.457 [2024-05-15 08:43:44.377794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:58.106 08:43:45 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:58.106 [2024-05-15 08:43:45.045347] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.106 null0 00:31:58.106 [2024-05-15 08:43:45.077394] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:58.106 [2024-05-15 08:43:45.077436] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:58.106 [2024-05-15 08:43:45.077675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:58.106 [2024-05-15 08:43:45.085426] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.106 08:43:45 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:58.106 [2024-05-15 08:43:45.097457] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:58.106 request: 00:31:58.106 { 00:31:58.106 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.106 "secure_channel": false, 00:31:58.106 "listen_address": { 00:31:58.106 "trtype": "tcp", 00:31:58.106 "traddr": "127.0.0.1", 00:31:58.106 "trsvcid": "4420" 00:31:58.106 }, 00:31:58.106 "method": "nvmf_subsystem_add_listener", 00:31:58.106 "req_id": 1 00:31:58.106 } 00:31:58.106 Got JSON-RPC error response 00:31:58.106 response: 00:31:58.106 { 00:31:58.106 "code": -32602, 00:31:58.106 "message": 
"Invalid parameters" 00:31:58.106 } 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:58.106 08:43:45 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:58.106 08:43:45 keyring_file -- keyring/file.sh@46 -- # bperfpid=493973 00:31:58.106 08:43:45 keyring_file -- keyring/file.sh@48 -- # waitforlisten 493973 /var/tmp/bperf.sock 00:31:58.106 08:43:45 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:58.107 08:43:45 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 493973 ']' 00:31:58.107 08:43:45 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:58.107 08:43:45 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:58.107 08:43:45 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:58.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:58.107 08:43:45 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:58.107 08:43:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:58.383 [2024-05-15 08:43:45.148225] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 00:31:58.383 [2024-05-15 08:43:45.148269] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493973 ] 00:31:58.383 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.383 [2024-05-15 08:43:45.199942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.383 [2024-05-15 08:43:45.271557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.034 08:43:45 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:59.034 08:43:45 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:59.034 08:43:45 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gbs1Xq1Tmr 00:31:59.034 08:43:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gbs1Xq1Tmr 00:31:59.307 08:43:46 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.M0QdxvUoLq 00:31:59.307 08:43:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.M0QdxvUoLq 00:31:59.307 08:43:46 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:59.307 08:43:46 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:59.307 08:43:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.307 08:43:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:59.307 08:43:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.596 
08:43:46 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.gbs1Xq1Tmr == \/\t\m\p\/\t\m\p\.\g\b\s\1\X\q\1\T\m\r ]] 00:31:59.596 08:43:46 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:59.596 08:43:46 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:59.596 08:43:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.596 08:43:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:59.596 08:43:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.913 08:43:46 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.M0QdxvUoLq == \/\t\m\p\/\t\m\p\.\M\0\Q\d\x\v\U\o\L\q ]] 00:31:59.913 08:43:46 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:59.913 08:43:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:59.913 08:43:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:59.913 08:43:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.913 08:43:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:59.913 08:43:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.913 08:43:46 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:59.913 08:43:46 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:59.913 08:43:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:59.913 08:43:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:59.913 08:43:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.913 08:43:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:59.913 08:43:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.198 08:43:47 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:00.198 08:43:47 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:00.198 08:43:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:00.198 [2024-05-15 08:43:47.177556] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:00.516 nvme0n1 00:32:00.516 08:43:47 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:00.516 08:43:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:00.516 08:43:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.516 08:43:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.516 08:43:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.516 08:43:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.516 08:43:47 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:00.516 08:43:47 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:00.516 08:43:47 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:32:00.516 08:43:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.516 08:43:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.516 08:43:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.516 08:43:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.786 08:43:47 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:00.786 08:43:47 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:00.786 Running I/O for 1 seconds... 00:32:01.722 00:32:01.722 Latency(us) 00:32:01.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.722 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:01.722 nvme0n1 : 1.00 18517.13 72.33 0.00 0.00 6897.73 3504.75 14019.01 00:32:01.722 =================================================================================================================== 00:32:01.722 Total : 18517.13 72.33 0.00 0.00 6897.73 3504.75 14019.01 00:32:01.722 0 00:32:01.722 08:43:48 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:01.722 08:43:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:01.979 08:43:48 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:01.979 08:43:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:01.979 08:43:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:01.979 08:43:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:01.979 08:43:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:01.979 08:43:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.237 08:43:49 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:02.237 08:43:49 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:02.237 08:43:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:02.237 08:43:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.237 08:43:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.237 08:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.237 08:43:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:02.495 08:43:49 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:02.495 08:43:49 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.495 08:43:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:02.495 08:43:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.495 08:43:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 
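The NOT wrapper whose evaluation is traced here (and continues below) is the harness's way of asserting that a command fails cleanly: it captures the exit status, treats anything above 128 as death by signal rather than an expected error, and succeeds only when the wrapped command returned nonzero. A condensed sketch of the logic visible in the autotest_common.sh trace; the real helper also consults an allow-list of tolerated errors (the '[[ -n '' ]]' check in the trace), which this sketch omits:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # >128 means killed by a signal: a real failure
        (( es != 0 ))                # succeed only if the command failed as expected
    }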
00:32:02.495 08:43:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:02.495 08:43:49 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:02.495 08:43:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:02.495 08:43:49 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.496 08:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.496 [2024-05-15 08:43:49.440873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:02.496 [2024-05-15 08:43:49.441681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2590d50 (107): Transport endpoint is not connected 00:32:02.496 [2024-05-15 08:43:49.442675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2590d50 (9): Bad file descriptor 00:32:02.496 [2024-05-15 08:43:49.443677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:02.496 [2024-05-15 08:43:49.443685] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:02.496 [2024-05-15 08:43:49.443692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
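The errors above are the expected outcome of this negative test: the target listener was set up against key0, so attaching with --psk key1 presumably fails the TLS handshake, and the initiator sees the socket drop (errno 107, then a bad file descriptor) before the controller can initialize; the JSON-RPC exchange that produced them is dumped just below. The refcnt checks that bracket each attach and detach reduce to one rpc.py call plus a jq filter, roughly:

    # get_refcnt, as composed from keyring/common.sh@8-12 in the traces
    get_refcnt() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
            | jq -r ".[] | select(.name == \"$1\") | .refcnt"
    }
    # e.g. (( $(get_refcnt key0) == 1 ))   # key registered but not held by a controller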
00:32:02.496 request: 00:32:02.496 { 00:32:02.496 "name": "nvme0", 00:32:02.496 "trtype": "tcp", 00:32:02.496 "traddr": "127.0.0.1", 00:32:02.496 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:02.496 "adrfam": "ipv4", 00:32:02.496 "trsvcid": "4420", 00:32:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:02.496 "psk": "key1", 00:32:02.496 "method": "bdev_nvme_attach_controller", 00:32:02.496 "req_id": 1 00:32:02.496 } 00:32:02.496 Got JSON-RPC error response 00:32:02.496 response: 00:32:02.496 { 00:32:02.496 "code": -32602, 00:32:02.496 "message": "Invalid parameters" 00:32:02.496 } 00:32:02.496 08:43:49 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:02.496 08:43:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:02.496 08:43:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:02.496 08:43:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:02.496 08:43:49 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:02.496 08:43:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:02.496 08:43:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.496 08:43:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.496 08:43:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:02.496 08:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.754 08:43:49 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:02.754 08:43:49 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:02.754 08:43:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:02.754 08:43:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.754 08:43:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.754 08:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.754 08:43:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:03.012 08:43:49 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:03.012 08:43:49 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:03.012 08:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:03.012 08:43:49 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:03.012 08:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:03.271 08:43:50 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:03.271 08:43:50 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:03.271 08:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.529 08:43:50 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:03.529 08:43:50 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.gbs1Xq1Tmr 00:32:03.529 08:43:50 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.gbs1Xq1Tmr 00:32:03.529 08:43:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:03.529 08:43:50 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.gbs1Xq1Tmr 00:32:03.529 08:43:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:03.529 08:43:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:03.529 08:43:50 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:03.529 08:43:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:03.529 08:43:50 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gbs1Xq1Tmr 00:32:03.530 08:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gbs1Xq1Tmr 00:32:03.530 [2024-05-15 08:43:50.498832] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gbs1Xq1Tmr': 0100660 00:32:03.530 [2024-05-15 08:43:50.498860] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:03.530 request: 00:32:03.530 { 00:32:03.530 "name": "key0", 00:32:03.530 "path": "/tmp/tmp.gbs1Xq1Tmr", 00:32:03.530 "method": "keyring_file_add_key", 00:32:03.530 "req_id": 1 00:32:03.530 } 00:32:03.530 Got JSON-RPC error response 00:32:03.530 response: 00:32:03.530 { 00:32:03.530 "code": -1, 00:32:03.530 "message": "Operation not permitted" 00:32:03.530 } 00:32:03.530 08:43:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:03.530 08:43:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:03.530 08:43:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:03.530 08:43:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:03.530 08:43:50 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.gbs1Xq1Tmr 00:32:03.530 08:43:50 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gbs1Xq1Tmr 00:32:03.530 08:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gbs1Xq1Tmr 00:32:03.788 08:43:50 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.gbs1Xq1Tmr 00:32:03.788 08:43:50 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:03.788 08:43:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:03.788 08:43:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.788 08:43:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.788 08:43:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:03.788 08:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:04.047 08:43:50 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:04.047 08:43:50 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.047 08:43:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:04.047 08:43:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.047 08:43:50 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:04.047 08:43:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:04.047 08:43:50 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:04.047 08:43:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:04.047 08:43:50 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.047 08:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.047 [2024-05-15 08:43:51.036250] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.gbs1Xq1Tmr': No such file or directory 00:32:04.047 [2024-05-15 08:43:51.036268] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:04.047 [2024-05-15 08:43:51.036288] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:04.047 [2024-05-15 08:43:51.036311] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:04.047 [2024-05-15 08:43:51.036317] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:04.047 request: 00:32:04.047 { 00:32:04.047 "name": "nvme0", 00:32:04.047 "trtype": "tcp", 00:32:04.047 "traddr": "127.0.0.1", 00:32:04.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.047 "adrfam": "ipv4", 00:32:04.047 "trsvcid": "4420", 00:32:04.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.047 "psk": "key0", 00:32:04.047 "method": "bdev_nvme_attach_controller", 00:32:04.047 "req_id": 1 00:32:04.047 } 00:32:04.047 Got JSON-RPC error response 00:32:04.047 response: 00:32:04.047 { 00:32:04.047 "code": -19, 00:32:04.047 "message": "No such device" 00:32:04.047 } 00:32:04.047 08:43:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:04.047 08:43:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:04.047 08:43:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:04.047 08:43:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:04.047 08:43:51 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:04.047 08:43:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:04.305 08:43:51 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:04.306 08:43:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:04.306 08:43:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:04.306 08:43:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:04.306 08:43:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:04.306 08:43:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:04.306 08:43:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.u5xzEJooA1 00:32:04.306 08:43:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:04.306 08:43:51 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:04.306 08:43:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.306 08:43:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:04.306 08:43:51 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:04.306 08:43:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:04.306 08:43:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:04.306 08:43:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.u5xzEJooA1 00:32:04.306 08:43:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.u5xzEJooA1 00:32:04.306 08:43:51 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.u5xzEJooA1 00:32:04.306 08:43:51 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u5xzEJooA1 00:32:04.306 08:43:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u5xzEJooA1 00:32:04.564 08:43:51 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.564 08:43:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.822 nvme0n1 00:32:04.823 08:43:51 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:04.823 08:43:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:04.823 08:43:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:04.823 08:43:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:04.823 08:43:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:04.823 08:43:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.082 08:43:51 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:05.082 08:43:51 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:05.082 08:43:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:05.082 08:43:52 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:05.082 08:43:52 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:05.082 08:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.082 08:43:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.082 08:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.341 08:43:52 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:05.341 08:43:52 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:05.341 08:43:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:05.341 08:43:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.341 08:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.341 08:43:52 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.341 08:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.600 08:43:52 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:05.600 08:43:52 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:05.600 08:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:05.600 08:43:52 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:05.600 08:43:52 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:05.600 08:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.858 08:43:52 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:05.858 08:43:52 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u5xzEJooA1 00:32:05.859 08:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u5xzEJooA1 00:32:06.117 08:43:52 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.M0QdxvUoLq 00:32:06.117 08:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.M0QdxvUoLq 00:32:06.117 08:43:53 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:06.117 08:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:06.375 nvme0n1 00:32:06.375 08:43:53 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:06.375 08:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:06.633 08:43:53 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:06.633 "subsystems": [ 00:32:06.633 { 00:32:06.633 "subsystem": "keyring", 00:32:06.633 "config": [ 00:32:06.633 { 00:32:06.633 "method": "keyring_file_add_key", 00:32:06.633 "params": { 00:32:06.633 "name": "key0", 00:32:06.633 "path": "/tmp/tmp.u5xzEJooA1" 00:32:06.633 } 00:32:06.633 }, 00:32:06.633 { 00:32:06.633 "method": "keyring_file_add_key", 00:32:06.633 "params": { 00:32:06.633 "name": "key1", 00:32:06.633 "path": "/tmp/tmp.M0QdxvUoLq" 00:32:06.633 } 00:32:06.633 } 00:32:06.633 ] 00:32:06.633 }, 00:32:06.633 { 00:32:06.633 "subsystem": "iobuf", 00:32:06.633 "config": [ 00:32:06.633 { 00:32:06.633 "method": "iobuf_set_options", 00:32:06.633 "params": { 00:32:06.633 "small_pool_count": 8192, 00:32:06.633 "large_pool_count": 1024, 00:32:06.634 "small_bufsize": 8192, 00:32:06.634 "large_bufsize": 135168 00:32:06.634 } 00:32:06.634 } 00:32:06.634 ] 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "subsystem": "sock", 00:32:06.634 "config": [ 00:32:06.634 { 00:32:06.634 "method": "sock_impl_set_options", 00:32:06.634 "params": { 00:32:06.634 
"impl_name": "posix", 00:32:06.634 "recv_buf_size": 2097152, 00:32:06.634 "send_buf_size": 2097152, 00:32:06.634 "enable_recv_pipe": true, 00:32:06.634 "enable_quickack": false, 00:32:06.634 "enable_placement_id": 0, 00:32:06.634 "enable_zerocopy_send_server": true, 00:32:06.634 "enable_zerocopy_send_client": false, 00:32:06.634 "zerocopy_threshold": 0, 00:32:06.634 "tls_version": 0, 00:32:06.634 "enable_ktls": false 00:32:06.634 } 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "method": "sock_impl_set_options", 00:32:06.634 "params": { 00:32:06.634 "impl_name": "ssl", 00:32:06.634 "recv_buf_size": 4096, 00:32:06.634 "send_buf_size": 4096, 00:32:06.634 "enable_recv_pipe": true, 00:32:06.634 "enable_quickack": false, 00:32:06.634 "enable_placement_id": 0, 00:32:06.634 "enable_zerocopy_send_server": true, 00:32:06.634 "enable_zerocopy_send_client": false, 00:32:06.634 "zerocopy_threshold": 0, 00:32:06.634 "tls_version": 0, 00:32:06.634 "enable_ktls": false 00:32:06.634 } 00:32:06.634 } 00:32:06.634 ] 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "subsystem": "vmd", 00:32:06.634 "config": [] 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "subsystem": "accel", 00:32:06.634 "config": [ 00:32:06.634 { 00:32:06.634 "method": "accel_set_options", 00:32:06.634 "params": { 00:32:06.634 "small_cache_size": 128, 00:32:06.634 "large_cache_size": 16, 00:32:06.634 "task_count": 2048, 00:32:06.634 "sequence_count": 2048, 00:32:06.634 "buf_count": 2048 00:32:06.634 } 00:32:06.634 } 00:32:06.634 ] 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "subsystem": "bdev", 00:32:06.634 "config": [ 00:32:06.634 { 00:32:06.634 "method": "bdev_set_options", 00:32:06.634 "params": { 00:32:06.634 "bdev_io_pool_size": 65535, 00:32:06.634 "bdev_io_cache_size": 256, 00:32:06.634 "bdev_auto_examine": true, 00:32:06.634 "iobuf_small_cache_size": 128, 00:32:06.634 "iobuf_large_cache_size": 16 00:32:06.634 } 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "method": "bdev_raid_set_options", 00:32:06.634 "params": { 00:32:06.634 "process_window_size_kb": 1024 00:32:06.634 } 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "method": "bdev_iscsi_set_options", 00:32:06.634 "params": { 00:32:06.634 "timeout_sec": 30 00:32:06.634 } 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "method": "bdev_nvme_set_options", 00:32:06.634 "params": { 00:32:06.634 "action_on_timeout": "none", 00:32:06.634 "timeout_us": 0, 00:32:06.634 "timeout_admin_us": 0, 00:32:06.634 "keep_alive_timeout_ms": 10000, 00:32:06.634 "arbitration_burst": 0, 00:32:06.634 "low_priority_weight": 0, 00:32:06.634 "medium_priority_weight": 0, 00:32:06.634 "high_priority_weight": 0, 00:32:06.634 "nvme_adminq_poll_period_us": 10000, 00:32:06.634 "nvme_ioq_poll_period_us": 0, 00:32:06.634 "io_queue_requests": 512, 00:32:06.634 "delay_cmd_submit": true, 00:32:06.634 "transport_retry_count": 4, 00:32:06.634 "bdev_retry_count": 3, 00:32:06.634 "transport_ack_timeout": 0, 00:32:06.634 "ctrlr_loss_timeout_sec": 0, 00:32:06.634 "reconnect_delay_sec": 0, 00:32:06.634 "fast_io_fail_timeout_sec": 0, 00:32:06.634 "disable_auto_failback": false, 00:32:06.634 "generate_uuids": false, 00:32:06.634 "transport_tos": 0, 00:32:06.634 "nvme_error_stat": false, 00:32:06.634 "rdma_srq_size": 0, 00:32:06.634 "io_path_stat": false, 00:32:06.634 "allow_accel_sequence": false, 00:32:06.634 "rdma_max_cq_size": 0, 00:32:06.634 "rdma_cm_event_timeout_ms": 0, 00:32:06.634 "dhchap_digests": [ 00:32:06.634 "sha256", 00:32:06.634 "sha384", 00:32:06.634 "sha512" 00:32:06.634 ], 00:32:06.634 "dhchap_dhgroups": [ 00:32:06.634 "null", 
00:32:06.634 "ffdhe2048", 00:32:06.634 "ffdhe3072", 00:32:06.634 "ffdhe4096", 00:32:06.634 "ffdhe6144", 00:32:06.634 "ffdhe8192" 00:32:06.634 ] 00:32:06.634 } 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "method": "bdev_nvme_attach_controller", 00:32:06.634 "params": { 00:32:06.634 "name": "nvme0", 00:32:06.634 "trtype": "TCP", 00:32:06.634 "adrfam": "IPv4", 00:32:06.634 "traddr": "127.0.0.1", 00:32:06.634 "trsvcid": "4420", 00:32:06.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.634 "prchk_reftag": false, 00:32:06.634 "prchk_guard": false, 00:32:06.634 "ctrlr_loss_timeout_sec": 0, 00:32:06.634 "reconnect_delay_sec": 0, 00:32:06.634 "fast_io_fail_timeout_sec": 0, 00:32:06.634 "psk": "key0", 00:32:06.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:06.634 "hdgst": false, 00:32:06.634 "ddgst": false 00:32:06.634 } 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "method": "bdev_nvme_set_hotplug", 00:32:06.634 "params": { 00:32:06.634 "period_us": 100000, 00:32:06.634 "enable": false 00:32:06.634 } 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "method": "bdev_wait_for_examine" 00:32:06.634 } 00:32:06.634 ] 00:32:06.634 }, 00:32:06.634 { 00:32:06.634 "subsystem": "nbd", 00:32:06.634 "config": [] 00:32:06.634 } 00:32:06.634 ] 00:32:06.634 }' 00:32:06.634 08:43:53 keyring_file -- keyring/file.sh@114 -- # killprocess 493973 00:32:06.634 08:43:53 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 493973 ']' 00:32:06.634 08:43:53 keyring_file -- common/autotest_common.sh@950 -- # kill -0 493973 00:32:06.634 08:43:53 keyring_file -- common/autotest_common.sh@951 -- # uname 00:32:06.634 08:43:53 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:06.634 08:43:53 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 493973 00:32:06.634 08:43:53 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:06.634 08:43:53 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:06.634 08:43:53 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 493973' 00:32:06.634 killing process with pid 493973 00:32:06.634 08:43:53 keyring_file -- common/autotest_common.sh@965 -- # kill 493973 00:32:06.634 Received shutdown signal, test time was about 1.000000 seconds 00:32:06.634 00:32:06.634 Latency(us) 00:32:06.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.634 =================================================================================================================== 00:32:06.634 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:06.634 08:43:53 keyring_file -- common/autotest_common.sh@970 -- # wait 493973 00:32:06.893 08:43:53 keyring_file -- keyring/file.sh@117 -- # bperfpid=495508 00:32:06.893 08:43:53 keyring_file -- keyring/file.sh@119 -- # waitforlisten 495508 /var/tmp/bperf.sock 00:32:06.893 08:43:53 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 495508 ']' 00:32:06.893 08:43:53 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:06.893 08:43:53 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:06.893 08:43:53 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:06.893 "subsystems": [ 00:32:06.893 { 00:32:06.893 "subsystem": "keyring", 00:32:06.893 "config": [ 00:32:06.893 { 00:32:06.893 "method": "keyring_file_add_key", 00:32:06.893 
"params": { 00:32:06.893 "name": "key0", 00:32:06.893 "path": "/tmp/tmp.u5xzEJooA1" 00:32:06.893 } 00:32:06.893 }, 00:32:06.893 { 00:32:06.893 "method": "keyring_file_add_key", 00:32:06.894 "params": { 00:32:06.894 "name": "key1", 00:32:06.894 "path": "/tmp/tmp.M0QdxvUoLq" 00:32:06.894 } 00:32:06.894 } 00:32:06.894 ] 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "subsystem": "iobuf", 00:32:06.894 "config": [ 00:32:06.894 { 00:32:06.894 "method": "iobuf_set_options", 00:32:06.894 "params": { 00:32:06.894 "small_pool_count": 8192, 00:32:06.894 "large_pool_count": 1024, 00:32:06.894 "small_bufsize": 8192, 00:32:06.894 "large_bufsize": 135168 00:32:06.894 } 00:32:06.894 } 00:32:06.894 ] 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "subsystem": "sock", 00:32:06.894 "config": [ 00:32:06.894 { 00:32:06.894 "method": "sock_impl_set_options", 00:32:06.894 "params": { 00:32:06.894 "impl_name": "posix", 00:32:06.894 "recv_buf_size": 2097152, 00:32:06.894 "send_buf_size": 2097152, 00:32:06.894 "enable_recv_pipe": true, 00:32:06.894 "enable_quickack": false, 00:32:06.894 "enable_placement_id": 0, 00:32:06.894 "enable_zerocopy_send_server": true, 00:32:06.894 "enable_zerocopy_send_client": false, 00:32:06.894 "zerocopy_threshold": 0, 00:32:06.894 "tls_version": 0, 00:32:06.894 "enable_ktls": false 00:32:06.894 } 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "method": "sock_impl_set_options", 00:32:06.894 "params": { 00:32:06.894 "impl_name": "ssl", 00:32:06.894 "recv_buf_size": 4096, 00:32:06.894 "send_buf_size": 4096, 00:32:06.894 "enable_recv_pipe": true, 00:32:06.894 "enable_quickack": false, 00:32:06.894 "enable_placement_id": 0, 00:32:06.894 "enable_zerocopy_send_server": true, 00:32:06.894 "enable_zerocopy_send_client": false, 00:32:06.894 "zerocopy_threshold": 0, 00:32:06.894 "tls_version": 0, 00:32:06.894 "enable_ktls": false 00:32:06.894 } 00:32:06.894 } 00:32:06.894 ] 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "subsystem": "vmd", 00:32:06.894 "config": [] 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "subsystem": "accel", 00:32:06.894 "config": [ 00:32:06.894 { 00:32:06.894 "method": "accel_set_options", 00:32:06.894 "params": { 00:32:06.894 "small_cache_size": 128, 00:32:06.894 "large_cache_size": 16, 00:32:06.894 "task_count": 2048, 00:32:06.894 "sequence_count": 2048, 00:32:06.894 "buf_count": 2048 00:32:06.894 } 00:32:06.894 } 00:32:06.894 ] 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "subsystem": "bdev", 00:32:06.894 "config": [ 00:32:06.894 { 00:32:06.894 "method": "bdev_set_options", 00:32:06.894 "params": { 00:32:06.894 "bdev_io_pool_size": 65535, 00:32:06.894 "bdev_io_cache_size": 256, 00:32:06.894 "bdev_auto_examine": true, 00:32:06.894 "iobuf_small_cache_size": 128, 00:32:06.894 "iobuf_large_cache_size": 16 00:32:06.894 } 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "method": "bdev_raid_set_options", 00:32:06.894 "params": { 00:32:06.894 "process_window_size_kb": 1024 00:32:06.894 } 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "method": "bdev_iscsi_set_options", 00:32:06.894 "params": { 00:32:06.894 "timeout_sec": 30 00:32:06.894 } 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "method": "bdev_nvme_set_options", 00:32:06.894 "params": { 00:32:06.894 "action_on_timeout": "none", 00:32:06.894 "timeout_us": 0, 00:32:06.894 "timeout_admin_us": 0, 00:32:06.894 "keep_alive_timeout_ms": 10000, 00:32:06.894 "arbitration_burst": 0, 00:32:06.894 "low_priority_weight": 0, 00:32:06.894 "medium_priority_weight": 0, 00:32:06.894 "high_priority_weight": 0, 00:32:06.894 "nvme_adminq_poll_period_us": 10000, 
00:32:06.894 "nvme_ioq_poll_period_us": 0, 00:32:06.894 "io_queue_requests": 512, 00:32:06.894 "delay_cmd_submit": true, 00:32:06.894 "transport_retry_count": 4, 00:32:06.894 "bdev_retry_count": 3, 00:32:06.894 "transport_ack_timeout": 0, 00:32:06.894 "ctrlr_loss_timeout_sec": 0, 00:32:06.894 "reconnect_delay_sec": 0, 00:32:06.894 "fast_io_fail_timeout_sec": 0, 00:32:06.894 "disable_auto_failback": false, 00:32:06.894 "generate_uuids": false, 00:32:06.894 "transport_tos": 0, 00:32:06.894 "nvme_error_stat": false, 00:32:06.894 "rdma_srq_size": 0, 00:32:06.894 "io_path_stat": false, 00:32:06.894 "allow_accel_sequence": false, 00:32:06.894 "rdma_max_cq_size": 0, 00:32:06.894 "rdma_cm_event_timeout_ms": 0, 00:32:06.894 "dhchap_digests": [ 00:32:06.894 "sha256", 00:32:06.894 "sha384", 00:32:06.894 "sha512" 00:32:06.894 ], 00:32:06.894 "dhchap_dhgroups": [ 00:32:06.894 "null", 00:32:06.894 "ffdhe2048", 00:32:06.894 "ffdhe3072", 00:32:06.894 "ffdhe4096", 00:32:06.894 "ffdhe6144", 00:32:06.894 "ffdhe8192" 00:32:06.894 ] 00:32:06.894 } 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "method": "bdev_nvme_attach_controller", 00:32:06.894 "params": { 00:32:06.894 "name": "nvme0", 00:32:06.894 "trtype": "TCP", 00:32:06.894 "adrfam": "IPv4", 00:32:06.894 "traddr": "127.0.0.1", 00:32:06.894 "trsvcid": "4420", 00:32:06.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.894 "prchk_reftag": false, 00:32:06.894 "prchk_guard": false, 00:32:06.894 "ctrlr_loss_timeout_sec": 0, 00:32:06.894 "reconnect_delay_sec": 0, 00:32:06.894 "fast_io_fail_timeout_sec": 0, 00:32:06.894 "psk": "key0", 00:32:06.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:06.894 "hdgst": false, 00:32:06.894 "ddgst": false 00:32:06.894 } 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "method": "bdev_nvme_set_hotplug", 00:32:06.894 "params": { 00:32:06.894 "period_us": 100000, 00:32:06.894 "enable": false 00:32:06.894 } 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "method": "bdev_wait_for_examine" 00:32:06.894 } 00:32:06.894 ] 00:32:06.894 }, 00:32:06.894 { 00:32:06.894 "subsystem": "nbd", 00:32:06.894 "config": [] 00:32:06.894 } 00:32:06.894 ] 00:32:06.894 }' 00:32:06.894 08:43:53 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:06.894 08:43:53 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:06.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:06.894 08:43:53 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:06.894 08:43:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:06.894 [2024-05-15 08:43:53.903841] Starting SPDK v24.05-pre git sha1 f0bf11db4 / DPDK 23.11.0 initialization... 
00:32:06.894 [2024-05-15 08:43:53.903889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495508 ] 00:32:07.153 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.153 [2024-05-15 08:43:53.958427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.153 [2024-05-15 08:43:54.027638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.411 [2024-05-15 08:43:54.177812] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:07.977 08:43:54 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:07.977 08:43:54 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:32:07.977 08:43:54 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:07.977 08:43:54 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:07.977 08:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:07.977 08:43:54 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:07.977 08:43:54 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:07.977 08:43:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:07.977 08:43:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:07.977 08:43:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:07.977 08:43:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:07.977 08:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.235 08:43:55 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:08.235 08:43:55 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:08.235 08:43:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:08.235 08:43:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:08.235 08:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.235 08:43:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:08.235 08:43:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:08.235 08:43:55 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:08.235 08:43:55 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:08.235 08:43:55 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:08.235 08:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:08.494 08:43:55 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:08.494 08:43:55 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:08.494 08:43:55 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.u5xzEJooA1 /tmp/tmp.M0QdxvUoLq 00:32:08.494 08:43:55 keyring_file -- keyring/file.sh@20 -- # killprocess 495508 00:32:08.494 08:43:55 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 495508 ']' 00:32:08.494 08:43:55 keyring_file -- common/autotest_common.sh@950 -- # kill -0 495508 00:32:08.494 08:43:55 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:32:08.494 08:43:55 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:08.494 08:43:55 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 495508 00:32:08.494 08:43:55 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:08.494 08:43:55 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:08.494 08:43:55 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 495508' 00:32:08.494 killing process with pid 495508 00:32:08.494 08:43:55 keyring_file -- common/autotest_common.sh@965 -- # kill 495508 00:32:08.494 Received shutdown signal, test time was about 1.000000 seconds 00:32:08.494 00:32:08.494 Latency(us) 00:32:08.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.494 =================================================================================================================== 00:32:08.494 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:08.494 08:43:55 keyring_file -- common/autotest_common.sh@970 -- # wait 495508 00:32:08.752 08:43:55 keyring_file -- keyring/file.sh@21 -- # killprocess 493920 00:32:08.752 08:43:55 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 493920 ']' 00:32:08.752 08:43:55 keyring_file -- common/autotest_common.sh@950 -- # kill -0 493920 00:32:08.752 08:43:55 keyring_file -- common/autotest_common.sh@951 -- # uname 00:32:08.752 08:43:55 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:08.752 08:43:55 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 493920 00:32:08.752 08:43:55 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:08.752 08:43:55 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:08.752 08:43:55 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 493920' 00:32:08.752 killing process with pid 493920 00:32:08.752 08:43:55 keyring_file -- common/autotest_common.sh@965 -- # kill 493920 00:32:08.752 [2024-05-15 08:43:55.720304] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:08.752 [2024-05-15 08:43:55.720340] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:08.752 08:43:55 keyring_file -- common/autotest_common.sh@970 -- # wait 493920 00:32:09.319 00:32:09.319 real 0m12.088s 00:32:09.319 user 0m29.067s 00:32:09.319 sys 0m2.644s 00:32:09.319 08:43:56 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:09.319 08:43:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:09.319 ************************************ 00:32:09.319 END TEST keyring_file 00:32:09.319 ************************************ 00:32:09.319 08:43:56 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:32:09.319 08:43:56 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:09.319 08:43:56 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:32:09.319 08:43:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:09.319 08:43:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:09.319 08:43:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:09.319 08:43:56 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:32:09.319 08:43:56 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:32:09.319 08:43:56 -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:09.319 08:43:56 -- common/autotest_common.sh@10 -- # set +x 00:32:09.319 08:43:56 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:32:09.319 08:43:56 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:32:09.319 08:43:56 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:32:09.319 08:43:56 -- common/autotest_common.sh@10 -- # set +x 00:32:13.506 INFO: APP EXITING 00:32:13.506 INFO: killing all VMs 00:32:13.506 INFO: killing vhost app 00:32:13.506 INFO: EXIT DONE 00:32:15.409 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:32:15.409 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:15.409 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:15.409 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:15.409 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:15.409 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:15.409 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:15.409 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:15.409 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:15.409 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:15.409 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:15.669 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:15.669 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:15.669 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:15.669 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:15.669 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:15.669 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:18.212 Cleaning 00:32:18.213 Removing: /var/run/dpdk/spdk0/config 00:32:18.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:18.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:18.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:18.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:18.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:18.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:18.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:18.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:18.213 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:18.213 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:18.213 Removing: /var/run/dpdk/spdk1/config 00:32:18.213 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:18.213 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:18.213 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:18.213 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:18.213 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:18.213 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:18.213 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:18.213 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:18.213 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:18.213 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:18.213 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:18.213 Removing: /var/run/dpdk/spdk2/config 00:32:18.213 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:18.213 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:18.213 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:18.213 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:18.213 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:18.213 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:18.213 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:18.213 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:18.213 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:18.213 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:18.213 Removing: /var/run/dpdk/spdk3/config 00:32:18.213 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:18.213 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:18.213 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:18.213 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:18.213 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:18.213 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:18.213 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:18.213 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:18.213 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:18.213 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:18.213 Removing: /var/run/dpdk/spdk4/config 00:32:18.213 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:18.213 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:18.213 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:18.213 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:18.213 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:18.213 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:18.213 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:18.213 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:18.213 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:18.213 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:18.213 Removing: /dev/shm/bdev_svc_trace.1 00:32:18.213 Removing: /dev/shm/nvmf_trace.0 00:32:18.213 Removing: /dev/shm/spdk_tgt_trace.pid134553 00:32:18.213 Removing: /var/run/dpdk/spdk0 00:32:18.213 Removing: /var/run/dpdk/spdk1 00:32:18.213 Removing: /var/run/dpdk/spdk2 00:32:18.213 Removing: /var/run/dpdk/spdk3 00:32:18.213 Removing: /var/run/dpdk/spdk4 00:32:18.213 Removing: /var/run/dpdk/spdk_pid132423 00:32:18.213 Removing: /var/run/dpdk/spdk_pid133480 00:32:18.213 Removing: /var/run/dpdk/spdk_pid134553 00:32:18.471 Removing: /var/run/dpdk/spdk_pid135184 00:32:18.471 Removing: /var/run/dpdk/spdk_pid136132 00:32:18.471 Removing: /var/run/dpdk/spdk_pid136374 00:32:18.471 Removing: /var/run/dpdk/spdk_pid137345 00:32:18.471 Removing: /var/run/dpdk/spdk_pid137578 00:32:18.471 Removing: /var/run/dpdk/spdk_pid137914 00:32:18.471 Removing: /var/run/dpdk/spdk_pid139426 00:32:18.471 Removing: /var/run/dpdk/spdk_pid140541 00:32:18.471 Removing: /var/run/dpdk/spdk_pid140902 00:32:18.471 
Removing: /var/run/dpdk/spdk_pid141250 00:32:18.471 Removing: /var/run/dpdk/spdk_pid141551 00:32:18.471 Removing: /var/run/dpdk/spdk_pid141841 00:32:18.471 Removing: /var/run/dpdk/spdk_pid142116 00:32:18.471 Removing: /var/run/dpdk/spdk_pid142455 00:32:18.471 Removing: /var/run/dpdk/spdk_pid142754 00:32:18.471 Removing: /var/run/dpdk/spdk_pid144117 00:32:18.471 Removing: /var/run/dpdk/spdk_pid147110 00:32:18.471 Removing: /var/run/dpdk/spdk_pid147367 00:32:18.471 Removing: /var/run/dpdk/spdk_pid147751 00:32:18.471 Removing: /var/run/dpdk/spdk_pid147861 00:32:18.471 Removing: /var/run/dpdk/spdk_pid148352 00:32:18.471 Removing: /var/run/dpdk/spdk_pid148536 00:32:18.471 Removing: /var/run/dpdk/spdk_pid148859 00:32:18.471 Removing: /var/run/dpdk/spdk_pid149084 00:32:18.471 Removing: /var/run/dpdk/spdk_pid149352 00:32:18.471 Removing: /var/run/dpdk/spdk_pid149574 00:32:18.471 Removing: /var/run/dpdk/spdk_pid149684 00:32:18.471 Removing: /var/run/dpdk/spdk_pid149855 00:32:18.471 Removing: /var/run/dpdk/spdk_pid150412 00:32:18.471 Removing: /var/run/dpdk/spdk_pid150658 00:32:18.471 Removing: /var/run/dpdk/spdk_pid150946 00:32:18.471 Removing: /var/run/dpdk/spdk_pid151219 00:32:18.471 Removing: /var/run/dpdk/spdk_pid151240 00:32:18.471 Removing: /var/run/dpdk/spdk_pid151491 00:32:18.471 Removing: /var/run/dpdk/spdk_pid151771 00:32:18.471 Removing: /var/run/dpdk/spdk_pid152025 00:32:18.471 Removing: /var/run/dpdk/spdk_pid152275 00:32:18.471 Removing: /var/run/dpdk/spdk_pid152524 00:32:18.471 Removing: /var/run/dpdk/spdk_pid152777 00:32:18.471 Removing: /var/run/dpdk/spdk_pid153026 00:32:18.471 Removing: /var/run/dpdk/spdk_pid153279 00:32:18.471 Removing: /var/run/dpdk/spdk_pid153524 00:32:18.471 Removing: /var/run/dpdk/spdk_pid153774 00:32:18.471 Removing: /var/run/dpdk/spdk_pid154025 00:32:18.471 Removing: /var/run/dpdk/spdk_pid154272 00:32:18.471 Removing: /var/run/dpdk/spdk_pid154530 00:32:18.471 Removing: /var/run/dpdk/spdk_pid154778 00:32:18.471 Removing: /var/run/dpdk/spdk_pid155023 00:32:18.471 Removing: /var/run/dpdk/spdk_pid155278 00:32:18.471 Removing: /var/run/dpdk/spdk_pid155524 00:32:18.471 Removing: /var/run/dpdk/spdk_pid155779 00:32:18.471 Removing: /var/run/dpdk/spdk_pid156030 00:32:18.471 Removing: /var/run/dpdk/spdk_pid156280 00:32:18.471 Removing: /var/run/dpdk/spdk_pid156537 00:32:18.471 Removing: /var/run/dpdk/spdk_pid156819 00:32:18.471 Removing: /var/run/dpdk/spdk_pid157126 00:32:18.471 Removing: /var/run/dpdk/spdk_pid160769 00:32:18.471 Removing: /var/run/dpdk/spdk_pid204863 00:32:18.471 Removing: /var/run/dpdk/spdk_pid209111 00:32:18.471 Removing: /var/run/dpdk/spdk_pid218942 00:32:18.471 Removing: /var/run/dpdk/spdk_pid224348 00:32:18.471 Removing: /var/run/dpdk/spdk_pid228325 00:32:18.471 Removing: /var/run/dpdk/spdk_pid228841 00:32:18.471 Removing: /var/run/dpdk/spdk_pid241038 00:32:18.471 Removing: /var/run/dpdk/spdk_pid241040 00:32:18.471 Removing: /var/run/dpdk/spdk_pid241955 00:32:18.471 Removing: /var/run/dpdk/spdk_pid242845 00:32:18.471 Removing: /var/run/dpdk/spdk_pid243608 00:32:18.471 Removing: /var/run/dpdk/spdk_pid244262 00:32:18.471 Removing: /var/run/dpdk/spdk_pid244276 00:32:18.471 Removing: /var/run/dpdk/spdk_pid244504 00:32:18.471 Removing: /var/run/dpdk/spdk_pid244732 00:32:18.471 Removing: /var/run/dpdk/spdk_pid244738 00:32:18.471 Removing: /var/run/dpdk/spdk_pid245650 00:32:18.471 Removing: /var/run/dpdk/spdk_pid246455 00:32:18.471 Removing: /var/run/dpdk/spdk_pid247275 00:32:18.730 Removing: /var/run/dpdk/spdk_pid247952 00:32:18.730 Removing: 
/var/run/dpdk/spdk_pid247954 00:32:18.730 Removing: /var/run/dpdk/spdk_pid248190 00:32:18.730 Removing: /var/run/dpdk/spdk_pid249436 00:32:18.730 Removing: /var/run/dpdk/spdk_pid250610 00:32:18.730 Removing: /var/run/dpdk/spdk_pid258758 00:32:18.730 Removing: /var/run/dpdk/spdk_pid259210 00:32:18.730 Removing: /var/run/dpdk/spdk_pid263452 00:32:18.730 Removing: /var/run/dpdk/spdk_pid269096 00:32:18.730 Removing: /var/run/dpdk/spdk_pid271866 00:32:18.730 Removing: /var/run/dpdk/spdk_pid282681 00:32:18.730 Removing: /var/run/dpdk/spdk_pid291738 00:32:18.730 Removing: /var/run/dpdk/spdk_pid293412 00:32:18.730 Removing: /var/run/dpdk/spdk_pid294336 00:32:18.730 Removing: /var/run/dpdk/spdk_pid310902 00:32:18.730 Removing: /var/run/dpdk/spdk_pid314737 00:32:18.730 Removing: /var/run/dpdk/spdk_pid319185 00:32:18.730 Removing: /var/run/dpdk/spdk_pid320789 00:32:18.730 Removing: /var/run/dpdk/spdk_pid322595 00:32:18.730 Removing: /var/run/dpdk/spdk_pid322754 00:32:18.730 Removing: /var/run/dpdk/spdk_pid322920 00:32:18.730 Removing: /var/run/dpdk/spdk_pid323115 00:32:18.730 Removing: /var/run/dpdk/spdk_pid323685 00:32:18.730 Removing: /var/run/dpdk/spdk_pid325457 00:32:18.730 Removing: /var/run/dpdk/spdk_pid326445 00:32:18.730 Removing: /var/run/dpdk/spdk_pid327059 00:32:18.730 Removing: /var/run/dpdk/spdk_pid329777 00:32:18.730 Removing: /var/run/dpdk/spdk_pid330363 00:32:18.730 Removing: /var/run/dpdk/spdk_pid331014 00:32:18.730 Removing: /var/run/dpdk/spdk_pid335104 00:32:18.730 Removing: /var/run/dpdk/spdk_pid345433 00:32:18.730 Removing: /var/run/dpdk/spdk_pid349442 00:32:18.730 Removing: /var/run/dpdk/spdk_pid355457 00:32:18.730 Removing: /var/run/dpdk/spdk_pid356842 00:32:18.730 Removing: /var/run/dpdk/spdk_pid358324 00:32:18.730 Removing: /var/run/dpdk/spdk_pid362631 00:32:18.730 Removing: /var/run/dpdk/spdk_pid366854 00:32:18.730 Removing: /var/run/dpdk/spdk_pid374203 00:32:18.730 Removing: /var/run/dpdk/spdk_pid374205 00:32:18.730 Removing: /var/run/dpdk/spdk_pid379310 00:32:18.730 Removing: /var/run/dpdk/spdk_pid379444 00:32:18.730 Removing: /var/run/dpdk/spdk_pid379673 00:32:18.730 Removing: /var/run/dpdk/spdk_pid380125 00:32:18.730 Removing: /var/run/dpdk/spdk_pid380149 00:32:18.730 Removing: /var/run/dpdk/spdk_pid384570 00:32:18.730 Removing: /var/run/dpdk/spdk_pid385030 00:32:18.730 Removing: /var/run/dpdk/spdk_pid389289 00:32:18.730 Removing: /var/run/dpdk/spdk_pid392077 00:32:18.730 Removing: /var/run/dpdk/spdk_pid397617 00:32:18.730 Removing: /var/run/dpdk/spdk_pid403009 00:32:18.730 Removing: /var/run/dpdk/spdk_pid411556 00:32:18.730 Removing: /var/run/dpdk/spdk_pid418529 00:32:18.730 Removing: /var/run/dpdk/spdk_pid418531 00:32:18.730 Removing: /var/run/dpdk/spdk_pid438039 00:32:18.730 Removing: /var/run/dpdk/spdk_pid438635 00:32:18.730 Removing: /var/run/dpdk/spdk_pid439230 00:32:18.730 Removing: /var/run/dpdk/spdk_pid439927 00:32:18.730 Removing: /var/run/dpdk/spdk_pid440899 00:32:18.730 Removing: /var/run/dpdk/spdk_pid441499 00:32:18.730 Removing: /var/run/dpdk/spdk_pid442082 00:32:18.730 Removing: /var/run/dpdk/spdk_pid442779 00:32:18.730 Removing: /var/run/dpdk/spdk_pid447026 00:32:18.730 Removing: /var/run/dpdk/spdk_pid447265 00:32:18.730 Removing: /var/run/dpdk/spdk_pid453314 00:32:18.730 Removing: /var/run/dpdk/spdk_pid453536 00:32:18.730 Removing: /var/run/dpdk/spdk_pid455805 00:32:18.730 Removing: /var/run/dpdk/spdk_pid463310 00:32:18.730 Removing: /var/run/dpdk/spdk_pid463321 00:32:18.730 Removing: /var/run/dpdk/spdk_pid468369 00:32:18.730 Removing: 
/var/run/dpdk/spdk_pid470434 00:32:18.730 Removing: /var/run/dpdk/spdk_pid472803 00:32:18.730 Removing: /var/run/dpdk/spdk_pid474061 00:32:18.730 Removing: /var/run/dpdk/spdk_pid476035 00:32:18.730 Removing: /var/run/dpdk/spdk_pid477209 00:32:18.730 Removing: /var/run/dpdk/spdk_pid485813 00:32:18.730 Removing: /var/run/dpdk/spdk_pid486275 00:32:18.988 Removing: /var/run/dpdk/spdk_pid486951 00:32:18.988 Removing: /var/run/dpdk/spdk_pid489209 00:32:18.988 Removing: /var/run/dpdk/spdk_pid489677 00:32:18.988 Removing: /var/run/dpdk/spdk_pid490142 00:32:18.988 Removing: /var/run/dpdk/spdk_pid493920 00:32:18.988 Removing: /var/run/dpdk/spdk_pid493973 00:32:18.988 Removing: /var/run/dpdk/spdk_pid495508 00:32:18.988 Clean 00:32:18.988 08:44:05 -- common/autotest_common.sh@1447 -- # return 0 00:32:18.988 08:44:05 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:32:18.988 08:44:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:18.988 08:44:05 -- common/autotest_common.sh@10 -- # set +x 00:32:18.988 08:44:05 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:32:18.988 08:44:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:18.988 08:44:05 -- common/autotest_common.sh@10 -- # set +x 00:32:18.988 08:44:05 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:18.988 08:44:05 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:18.988 08:44:05 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:18.988 08:44:05 -- spdk/autotest.sh@387 -- # hash lcov 00:32:18.988 08:44:05 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:18.988 08:44:05 -- spdk/autotest.sh@389 -- # hostname 00:32:18.988 08:44:05 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:19.247 geninfo: WARNING: invalid characters removed from testname! 
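(Note on the coverage steps: the capture above and the lcov passes that follow, autotest.sh@390 through @395, implement autotest's usual coverage pipeline — merge the pre-test baseline cov_base.info with the just-captured cov_test.info, then strip DPDK, system headers, and the non-target example/app sources from the total. A condensed sketch of that sequence, with the full --rc branch/function coverage switches from the trace trimmed for brevity; paths are relative to the workspace this run used:

    # capture counters accumulated during the run, as autotest.sh@389 just did
    lcov --no-external -q -c -d ./spdk -t "$(hostname)" -o cov_test.info
    # fold into the pre-test baseline (autotest.sh@390)
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # filter third-party and helper-app code out of the totals (autotest.sh@391-395)
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info
    done
)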
00:32:41.169 08:44:24 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:41.169 08:44:27 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:42.544 08:44:29 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:43.919 08:44:30 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:45.820 08:44:32 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:47.719 08:44:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:49.622 08:44:36 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:49.622 08:44:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.622 08:44:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:49.622 08:44:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.622 08:44:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.622 08:44:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.622 08:44:36 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.622 08:44:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.622 08:44:36 -- paths/export.sh@5 -- $ export PATH 00:32:49.622 08:44:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.622 08:44:36 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:32:49.622 08:44:36 -- common/autobuild_common.sh@437 -- $ date +%s 00:32:49.622 08:44:36 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715755476.XXXXXX 00:32:49.622 08:44:36 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715755476.axzSgy 00:32:49.622 08:44:36 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:32:49.622 08:44:36 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:32:49.622 08:44:36 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:32:49.622 08:44:36 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:32:49.622 08:44:36 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:32:49.622 08:44:36 -- common/autobuild_common.sh@453 -- $ get_config_params 00:32:49.622 08:44:36 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:32:49.622 08:44:36 -- common/autotest_common.sh@10 -- $ set +x 00:32:49.622 08:44:36 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:32:49.622 08:44:36 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:32:49.622 08:44:36 -- pm/common@17 -- $ local monitor 00:32:49.622 08:44:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:49.622 08:44:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:49.622 08:44:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:49.622 08:44:36 -- pm/common@21 -- $ date +%s 00:32:49.622 08:44:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:49.622 08:44:36 -- pm/common@21 -- $ date +%s 00:32:49.622 
08:44:36 -- pm/common@25 -- $ sleep 1 00:32:49.622 08:44:36 -- pm/common@21 -- $ date +%s 00:32:49.622 08:44:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715755476 00:32:49.622 08:44:36 -- pm/common@21 -- $ date +%s 00:32:49.622 08:44:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715755476 00:32:49.622 08:44:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715755476 00:32:49.622 08:44:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715755476 00:32:49.622 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715755476_collect-vmstat.pm.log 00:32:49.622 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715755476_collect-cpu-load.pm.log 00:32:49.622 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715755476_collect-cpu-temp.pm.log 00:32:49.622 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715755476_collect-bmc-pm.bmc.pm.log 00:32:50.560 08:44:37 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:32:50.560 08:44:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:32:50.560 08:44:37 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:50.560 08:44:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:50.560 08:44:37 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:50.560 08:44:37 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:50.560 08:44:37 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:50.560 08:44:37 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:50.560 08:44:37 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:50.560 08:44:37 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:50.560 08:44:37 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:50.560 08:44:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:50.560 08:44:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:50.560 08:44:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:50.560 08:44:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:32:50.560 08:44:37 -- pm/common@44 -- $ pid=504992 00:32:50.560 08:44:37 -- pm/common@50 -- $ kill -TERM 504992 00:32:50.560 08:44:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:50.560 08:44:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:32:50.560 08:44:37 -- pm/common@44 -- $ pid=504994 00:32:50.560 08:44:37 -- pm/common@50 -- $ kill 
-TERM 504994
00:32:50.560 08:44:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:50.560 08:44:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:32:50.560 08:44:37 -- pm/common@44 -- $ pid=504996
00:32:50.560 08:44:37 -- pm/common@50 -- $ kill -TERM 504996
00:32:50.560 08:44:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:50.560 08:44:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:32:50.560 08:44:37 -- pm/common@44 -- $ pid=505028
00:32:50.560 08:44:37 -- pm/common@50 -- $ sudo -E kill -TERM 505028
00:32:50.560 + [[ -n 27999 ]]
+ sudo kill 27999
00:32:50.570 [Pipeline] }
00:32:50.590 [Pipeline] // stage
00:32:50.596 [Pipeline] }
00:32:50.613 [Pipeline] // timeout
00:32:50.618 [Pipeline] }
00:32:50.635 [Pipeline] // catchError
00:32:50.641 [Pipeline] }
00:32:50.660 [Pipeline] // wrap
00:32:50.666 [Pipeline] }
00:32:50.683 [Pipeline] // catchError
00:32:50.692 [Pipeline] stage
00:32:50.694 [Pipeline] { (Epilogue)
00:32:50.708 [Pipeline] catchError
00:32:50.710 [Pipeline] {
00:32:50.723 [Pipeline] echo
00:32:50.725 Cleanup processes
00:32:50.731 [Pipeline] sh
00:32:51.019 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:51.019 505111 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:32:51.019 505399 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:51.033 [Pipeline] sh
00:32:51.321 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:51.321 ++ grep -v 'sudo pgrep'
00:32:51.321 ++ awk '{print $1}'
00:32:51.321 + sudo kill -9 505111
00:32:51.335 [Pipeline] sh
00:32:51.622 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:01.614 [Pipeline] sh
00:33:01.905 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:01.905 Artifacts sizes are good
00:33:01.919 [Pipeline] archiveArtifacts
00:33:01.926 Archiving artifacts
00:33:02.373 [Pipeline] sh
00:33:02.657 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:02.674 [Pipeline] cleanWs
00:33:02.684 [WS-CLEANUP] Deleting project workspace...
00:33:02.684 [WS-CLEANUP] Deferred wipeout is used...
00:33:02.691 [WS-CLEANUP] done
00:33:02.693 [Pipeline] }
00:33:02.714 [Pipeline] // catchError
00:33:02.725 [Pipeline] sh
00:33:03.006 + logger -p user.info -t JENKINS-CI
00:33:03.015 [Pipeline] }
00:33:03.030 [Pipeline] // stage
00:33:03.035 [Pipeline] }
00:33:03.055 [Pipeline] // node
00:33:03.061 [Pipeline] End of Pipeline
00:33:03.095 Finished: SUCCESS
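(Note for anyone replaying the keyring_file portion of this run by hand: the test traced above reduces to a short RPC conversation with the bdevperf app — register a TLS PSK file under a key name, attach an NVMe/TCP controller that authenticates with it, check the key's refcount, and tear everything down. A minimal sketch of that sequence, assuming bdevperf is already listening on /var/tmp/bperf.sock; /tmp/psk.key is a placeholder for the interchange-format key files (NVMeTLSkey-1:...) the test generated under mktemp names:

    RPC='./spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    chmod 0600 /tmp/psk.key                        # the test tightens permissions before registering the file
    $RPC keyring_file_add_key key0 /tmp/psk.key    # expose the PSK to the keyring as key0
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    $RPC keyring_get_keys                          # refcnt for key0 reads 2 while nvme0 holds it (file.sh@99)
    $RPC keyring_file_remove_key key0              # key is marked removed; nvme0 still holds one ref (file.sh@101-102)
    $RPC bdev_nvme_detach_controller nvme0         # last reference dropped; keyring length goes to 0 (file.sh@104)

Attaching before the key file exists, or after it is deleted, fails with "Could not stat key file ... No such file or directory", as the earlier bdev_nvme_attach_controller error in this section shows.)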